Unleashing the power of machine learning models in banking through explainable artificial intelligence (XAI)

  • “The emerging field of explainable AI (or XAI) can help banks navigate issues of transparency and trust, and provide greater clarity on their AI governance. XAI aims to make AI models more explainable, intuitive, and understandable to human users without sacrificing performance or prediction accuracy. Explainability is also becoming a more pressing concern for banking regulators, who want to be assured that AI processes and outcomes are ‘reasonably understood’ by bank employees.
  • Specifically, we consider the following questions: 1) How should banks weigh the benefits of explainability against potential reductions in accuracy and performance? 2) What is the most effective way for AI teams to prioritize efforts that enhance transparency across the model development pipeline? 3) Which models should be the biggest focus of explainability? And 4) How should banks deploy their limited resources to ensure explainability across their model inventory?”


XAI—Explainable artificial intelligence

  • “For many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners.
  • The purpose of an explainable AI (XAI) system is to make its behavior more intelligible to humans by providing explanations.
  • Accuracy versus interpretability: interpretability involves tradeoffs with accuracy and fidelity, and the goal is to strike a balance between accuracy, interpretability, and tractability.”
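The accuracy-versus-interpretability tradeoff is easiest to see at the interpretable end of the spectrum: a linear model's score decomposes exactly into per-feature contributions, which is the additive idea that techniques such as SHAP generalize to more complex models. The sketch below is a minimal illustration only; the feature names, weights, and the toy credit-scoring setup are all hypothetical, not taken from the articles quoted above.

```python
# Minimal sketch of one XAI idea: additive feature contributions
# for a linear model. All names and numbers are hypothetical.

def explain_linear(weights, bias, x):
    """Return the model score and each feature's contribution w_i * x_i."""
    contributions = {name: weights[name] * x[name] for name in weights}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-scoring model: higher score = lower estimated risk.
weights = {"income_k": 0.02, "debt_ratio": -1.5, "late_payments": -0.8}
bias = 1.0
applicant = {"income_k": 60, "debt_ratio": 0.4, "late_payments": 2}

score, contribs = explain_linear(weights, bias, applicant)

# Rank features by the magnitude of their effect on this one decision --
# this per-applicant ranking is the "explanation" a reviewer would see.
ranked = sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

A bank employee reviewing this decision sees not just the score but which inputs drove it (here, `late_payments` contributes -1.6, `income_k` +1.2, `debt_ratio` -0.6). Complex models such as gradient-boosted trees give up this exact decomposition in exchange for accuracy, which is precisely the balance the tradeoff bullet describes.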