Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion

Authors

  • Siddhartha Vadlamudi, Vintech Solutions

DOI:

https://doi.org/10.18034/ei.v3i2.519

Keywords:

Trustworthiness, Artificial Intelligence, TAI Principle

Abstract

Artificial intelligence (AI) offers numerous opportunities to contribute to the prosperity of individuals and the stability of economies and societies, yet it also raises a variety of novel ethical, legal, social, and technological challenges. Trustworthy AI (TAI) rests on the idea that trust forms the foundation of societies, economies, and sustainable development, and that individuals, organizations, and societies will therefore only realize the full potential of AI if trust can be established in its development, deployment, and use. The risks of unintended and negative consequences associated with AI are correspondingly high, particularly at scale. Most AI today is in fact artificial narrow intelligence, designed to perform a specific task on previously curated data from a particular source. Because most AI models build on correlations, their predictions may fail to generalize to different populations or settings and may exacerbate existing disparities and biases. As the AI industry is highly imbalanced and practitioners are already overwhelmed by other digital tools, there may be little capacity to catch errors. In this article, we present the concept of TAI and its five foundational principles: (1) beneficence, (2) non-maleficence, (3) autonomy, (4) justice, and (5) explicability. We further draw on these five principles to develop a data-driven research framework for TAI and demonstrate its application by outlining productive paths for future research, particularly with regard to the distributed ledger technology-based realization of TAI.

 


Author Biography

  • Siddhartha Vadlamudi, Vintech Solutions

    Quixey Inc., Vintech Solutions, USA



Published

2015-12-21

Issue

Vol. 3 No. 2 (2015)

Section

Peer Reviewed Articles

How to Cite

Vadlamudi, S. (2015). Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion. Engineering International, 3(2), 105-114. https://doi.org/10.18034/ei.v3i2.519
