Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion
DOI: https://doi.org/10.18034/ei.v3i2.519
Keywords: Trustworthiness, Artificial Intelligence, TAI Principle
Abstract
Artificial intelligence (AI) offers numerous opportunities to contribute to the well-being of individuals and the prosperity of economies and societies, but it also raises a variety of novel ethical, legal, social, and technological challenges. Trustworthy AI (TAI) rests on the idea that trust forms the foundation of societies, economies, and sustainable development, and that individuals, organizations, and societies will therefore only realize the full potential of AI if trust can be established in its development, deployment, and use. The risks of unintended and negative consequences of AI are correspondingly high, particularly at scale. Most AI in use today is in fact artificial narrow intelligence, designed to accomplish a specific task on previously curated data from a particular source. Because most AI models are built on correlations, their predictions may fail to generalize to different populations or settings and may amplify existing disparities and biases. As the AI workforce is highly imbalanced and practitioners are already overwhelmed by other digital tools, there may be little capacity to catch such errors. With this article, we introduce the concept of TAI and its five foundational principles: (1) beneficence, (2) non-maleficence, (3) autonomy, (4) justice, and (5) explicability. We then draw on these five principles to develop a data-driven research framework for TAI and demonstrate its application by outlining fruitful avenues for future research, particularly with regard to the distributed ledger technology-based realization of TAI.
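To make the abstract's justice concern concrete, the sketch below shows one common fairness probe: the demographic parity gap, the difference in a model's positive-prediction rates between two subgroups. This is a minimal illustration, not the article's own method; the function name, data, and group labels are assumptions introduced for the example.

```python
# Minimal sketch (illustrative, not the article's method): quantify one
# facet of the "justice" principle by comparing a model's positive-
# prediction rates across two subgroups (demographic parity).
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical binary predictions for ten individuals in two subgroups.
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Group 0 is predicted positive 80% of the time, group 1 only 20%:
# a 0.60 gap that would warrant scrutiny under the justice principle.
print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

Demographic parity is only one of several competing fairness criteria (others include equalized odds and calibration); which criterion applies depends on context, which is one reason TAI treats justice as a principle rather than a single metric.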
License
Engineering International is an Open Access journal. Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication with the work simultaneously licensed under a CC BY-NC 4.0 International License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of their work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal. We require authors to inform us of any instances of re-publication.