The rise of explainable AI
AI promises to enhance human understanding by automating decision-making. Yet with greater reliance on AI and machine learning comes hesitation about the trustworthiness of model-driven recommendations, and rightly so: many machine learning applications offer no transparent view of the algorithms or logic behind their decisions and recommendations. As Adrian Weller, a senior research fellow at the University of Cambridge, notes, “Transparency is deemed critical to enable effective real-world deployment of intelligent systems.” This need for transparency is driving the growth of explainable AI: the practice of understanding and presenting transparent views into machine learning models.
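To make that practice concrete, here is a minimal sketch of one widely used explainability technique, permutation feature importance, which estimates how much a model relies on each input by measuring the accuracy lost when that input is shuffled. It uses scikit-learn's `permutation_importance`; the random-forest model and iris dataset are illustrative assumptions, not anything prescribed above.

```python
# A minimal sketch: permutation feature importance with scikit-learn.
# The model and dataset here are placeholders chosen for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data: 150 iris flowers, 4 numeric features each.
X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# An otherwise opaque ensemble model.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature 10 times and record the mean accuracy drop;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Present the model's behavior in a human-readable ranking.
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```

Even a simple summary like this ranking gives stakeholders a transparent view into which inputs drive a model's recommendations, which is the core aim of explainable AI.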