TRANSPARENCY AND PRIVACY: THE ROLE OF EXPLAINABLE AI AND

Authors

  • Himabindu Chinni
  • Chithaluri Narsimha
  • Mohammad Abdul Waheed Farooqui
  • Ithagoni Tejaswini

Keywords:

Explainable AI (XAI)

Abstract

The widespread use of artificial intelligence (AI) across many industries has raised
serious concerns about transparency and privacy. Although AI systems excel at
automating decision-making, their opaque design makes them difficult to understand
and trust, which is particularly problematic in sensitive fields such as medicine,
banking, and law enforcement. In this work, we examine how explainable AI (XAI) can
address these issues by showing how it can make AI models more transparent without
compromising users' privacy. Explainable AI techniques aim to make the
decision-making process of complex models intelligible, helping stakeholders
understand how and why particular results are reached. At the same time,
privacy-preserving methods safeguard personal information without degrading the
performance of AI systems. This article reviews the current state-of-the-art
approaches in both areas and discusses the potential of explainable AI and
privacy-preserving AI to strike a balance among openness, accountability, and
privacy. The paper further proposes a framework that integrates explainable AI with
privacy protection to enable more trustworthy, interpretable, and transparent AI
systems. Our purpose is to provide a thorough study of how XAI and privacy
strategies work together to increase trust in AI-driven systems and promote the
ethical deployment of AI.

Published

29-12-2022

How to Cite

TRANSPARENCY AND PRIVACY: THE ROLE OF EXPLAINABLE AI AND. (2022). International Journal of Information Technology and Computer Engineering, 10(4), 247-256. https://ijitce.org/index.php/ijitce/article/view/825