
- “The emerging field of explainable AI (or XAI) can help banks navigate issues of transparency and trust, and provide greater clarity on their AI governance. XAI aims to make AI models more explainable, intuitive, and understandable to human users without sacrificing performance or prediction accuracy. Explainability is also becoming a more pressing concern for banking regulators who want to be assured that AI processes and outcomes are “reasonably understood” by bank employees.
- Specifically, we consider the following questions:
  1. How should banks weigh the benefits of explainability against potential reductions in accuracy and performance?
  2. What’s the most effective way for AI teams to prioritize efforts that enhance transparency across the model development pipeline?
  3. Which models should be the biggest priority and focus of explainability efforts?
  4. How should banks deploy their limited resources to ensure explainability across their model inventory?”
https://www2.deloitte.com/uk/en/insights/industry/financial-services/explainable-ai-in-banking.html