“Kate Vredenburgh says individuals are, in fact, owed explanations when AI makes decisions that affect our lives. Vredenburgh, who is a 2019-2020 postdoctoral fellow at the McCoy Family Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence at Stanford University, will soon start an assistant professorship in the philosophy, logic, and scientific method department at the London School of Economics.”
- “The market for advanced technologies including AI and ML will continue its exponential growth, with market research firm IDC projecting that spending on AI systems will reach $98 billion in 2023, more than two and a half times the $37.5 billion projected for 2019. IDC also foresees that retail and banking will drive much of this spending, as those industries invested more than $5 billion in 2019.
- Through real-time monitoring, companies will be given visibility into the “black box” to see exactly how their AI and ML models operate. In other words, explainability will enable data scientists and engineers to know what to look for (a.k.a. transparency) so they can make the right decisions (a.k.a. insight) to improve their models and reduce potential risks (a.k.a. building trust).”
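The "visibility into the black box" idea above can be made concrete with one of the simplest explanation techniques, permutation importance: scramble one input feature and measure how much the model's error grows. Everything below (the model, the feature names, the data) is invented for illustration; a real pipeline would probe a trained model on held-out data.

```python
# Minimal sketch of one explainability technique: permutation importance.
# The "model" and data are hypothetical, chosen so the result is obvious.

def model(row):
    """Hypothetical scoring model: heavily weights income, barely uses age."""
    income, age = row
    return 3.0 * income + 0.1 * age

def mse(rows, targets):
    """Mean squared error of the model on the given rows."""
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature_idx):
    """Error increase when one feature's values are scrambled: a large
    increase means the model relies on that feature. A real implementation
    shuffles randomly; a cyclic shift keeps this sketch deterministic."""
    vals = [r[feature_idx] for r in rows]
    vals = vals[1:] + vals[:1]  # deterministic stand-in for a random shuffle
    permuted = [
        tuple(v if i == feature_idx else x for i, x in enumerate(r))
        for r, v in zip(rows, vals)
    ]
    return mse(permuted, targets) - mse(rows, targets)

rows = [(1.0, 30.0), (2.0, 45.0), (0.5, 60.0), (3.0, 25.0)]
targets = [model(r) for r in rows]  # the model fits this toy data exactly

print(permutation_importance(rows, targets, 0))  # income: large error increase
print(permutation_importance(rows, targets, 1))  # age: small error increase
```

A data scientist reading these two numbers gets exactly the transparency-to-insight loop the excerpt describes: the model's behavior is dominated by income, so that is the feature to audit for risk.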
- “For many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners.
- The purpose of an explainable AI (XAI) system is to make its behavior more intelligible to humans by providing explanations.
- Accuracy versus interpretability – Interpretability involves tradeoffs with accuracy and fidelity; a system must strike a balance between accuracy, interpretability, and tractability.”
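The accuracy-versus-interpretability tradeoff can be shown with a toy comparison, assuming two hypothetical classifiers on invented data: a single-threshold rule anyone can audit, and a weighted combination that scores higher but is harder to explain.

```python
# Toy illustration of the accuracy/interpretability tradeoff.
# Data points and both models are invented for this sketch.

data = [  # (feature1, feature2, label)
    (0.9, 0.2, 1), (0.8, 0.7, 1), (0.2, 0.9, 1), (0.3, 0.1, 0),
    (0.1, 0.4, 0), (0.7, 0.1, 1), (0.2, 0.2, 0), (0.4, 0.8, 1),
]

def interpretable(x1, x2):
    # "Predict 1 if feature1 > 0.5" -- a rule anyone can audit.
    return 1 if x1 > 0.5 else 0

def complex_model(x1, x2):
    # Weighted combination -- more accurate here, but harder to explain.
    return 1 if 0.6 * x1 + 0.5 * x2 > 0.4 else 0

def accuracy(model):
    return sum(model(x1, x2) == y for x1, x2, y in data) / len(data)

print(accuracy(interpretable))   # the auditable rule misses some cases
print(accuracy(complex_model))   # the opaque model fits this data better
```

On this toy data the auditable rule scores 0.75 while the weighted model scores 1.0; which end of that tradeoff is acceptable depends on how critical the application is.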
“The issues of ethics in AI revolve around four key tenets: security, transparency, fairness, and liability. Each of these concerns, when addressed, promotes trust in the application of artificial intelligence. In other words, the trust generated by ethical use can be viewed as a driving factor in consumer acceptance of AI.
The Facebook-Cambridge Analytica scandal brought data privacy into the spotlight, and GDPR made us think about where our personal data is going.
Google Duplex is a perfect example of the role that transparency plays in the reasonable person’s acceptance of an AI tool. Initially, Google Duplex faced rejection from many due to lack of transparency.
And, as Amazon’s recruitment AI program shows, the reasonable person cares about this bias as well. The e-commerce giant scrapped its AI recruitment tool after it taught itself to penalize female candidates. The result was a widespread backlash against the use of AI in recruitment.
Artificial intelligence is now making its way into oncology, where an incorrect recommendation can seriously harm the patient involved. When doctors viewed AI recommendations as unsafe, many hospitals pulled the plug.”
“Show Your Work
A Google Brain scientist built a tool called TCAV (Testing with Concept Activation Vectors) that can help artificial intelligence systems explain how they arrived at their conclusions, a notoriously tricky task for machine learning algorithms.
Tools like TCAV are in high demand as AI finds itself under greater scrutiny for the racial and gender bias that plagues artificial intelligence and the training data used to develop it.”
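The core idea behind TCAV can be sketched in a few lines. The real method trains a linear classifier on a network layer's activations to separate examples of a human concept (say, "striped") from random examples, and uses that classifier's normal vector as the concept activation vector (CAV); the difference-of-means shortcut below is a simplification, and every number here is invented.

```python
# Simplified sketch of the TCAV idea on toy 3-d "activations".
# Real TCAV fits a linear classifier; we approximate the CAV as the
# difference of mean activations (concept minus random), then measure
# how aligned a class score's gradient is with that concept direction.

def vec_sub(a, b):
    return [x - y for x, y in zip(a, b)]

def mean(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical layer activations for "striped" concept images vs random images.
concept_acts = [[1.0, 0.2, 0.1], [0.9, 0.1, 0.0], [1.1, 0.3, 0.2]]
random_acts  = [[0.1, 0.2, 0.3], [0.0, 0.1, 0.2], [0.2, 0.3, 0.1]]

cav = vec_sub(mean(concept_acts), mean(random_acts))  # points toward the concept

# Invented gradient of a "zebra" class score w.r.t. these activations.
zebra_grad = [0.8, -0.1, 0.05]

sensitivity = dot(zebra_grad, cav)
print(sensitivity)  # positive: the "striped" concept raises the zebra score
```

Aggregating the sign of this sensitivity over many inputs gives a TCAV-style score, which is how such tools surface whether a concept like skin tone or gender is influencing predictions, exactly the scrutiny the excerpt describes.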