Monitoring is critical to successful AI – TechCrunch

  • “The market for advanced technologies including AI and ML will continue its exponential growth, with market research firm IDC projecting that spending on AI systems will reach $98 billion in 2023, more than two and one-half times the $37.5 billion that was projected to be spent in 2019. Additionally, IDC foresees that retail and banking will drive much of this spending, as the industries invested more than $5 billion in 2019.
  • Through real-time monitoring, companies will be given visibility into the “black box” to see exactly how their AI and ML models operate. In other words, explainability will enable data scientists and engineers to know what to look for (a.k.a. transparency) so they can make the right decisions (a.k.a. insight) to improve their models and reduce potential risks (a.k.a. building trust).”
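The idea of real-time monitoring described above can be sketched in a few lines. This is a minimal illustration, not the tooling the article refers to: it assumes a training-time baseline mean score and an arbitrary alert threshold, and flags drift when a rolling window of live model scores strays too far from the baseline.

```python
from collections import deque

# Assumed values for illustration only:
BASELINE_MEAN = 0.50   # mean model score observed during training
THRESHOLD = 0.15       # tolerated deviation before raising an alert

class DriftMonitor:
    """Track a rolling window of live prediction scores and flag drift."""

    def __init__(self, window=100):
        self.scores = deque(maxlen=window)

    def observe(self, score):
        """Record one live score; return True if the rolling mean has drifted."""
        self.scores.append(score)
        mean = sum(self.scores) / len(self.scores)
        return abs(mean - BASELINE_MEAN) > THRESHOLD

# Usage: feed scores as predictions are served; a run of unusually high
# scores pushes the rolling mean past the threshold and triggers an alert.
monitor = DriftMonitor(window=50)
alerts = [monitor.observe(0.9) for _ in range(50)]
```

In production systems this kind of check is usually run per feature and per output, with statistically grounded thresholds, but the shape of the loop is the same: compare live behavior against a training-time reference and surface deviations.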

What would make you trust a robot?

  • “AAA’s annual automated vehicle survey from March 2019 found that 71 percent of people are “afraid” to ride in fully self-driving vehicles.”
  • Scientists are trying to get people to trust A.I. robots more. How? By having them explain themselves.
  • “In the past, robotics wasn’t closely tied to psychology. This is changing,” says Dr. Hongjing Lu, a professor in the department of psychology and statistics at UCLA…’Trust is a central component of humans and robots working together.'”

XAI—Explainable artificial intelligence

  • “For many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners
  • The purpose of an explainable AI (XAI) system is to make its behavior more intelligible to humans by providing explanations.
  • Accuracy versus interpretability – Interpretability needs to consider tradeoffs involving accuracy and fidelity and to strike a balance between accuracy, interpretability, and tractability.”
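One concrete way an XAI system can make a black-box model's behavior more intelligible is permutation importance: shuffle one feature at a time and measure how much the model's outputs change. The sketch below is a pure-Python illustration under assumed conditions; the `model` function is a toy linear scorer standing in for a black box, and its weights are invented for the example.

```python
import random

# Toy "black box" for illustration: feature 0 matters most, feature 2 not at all.
# (Weights are assumptions for this example, not from any cited system.)
def model(x):
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def permutation_importance(model, X, trials=100, seed=0):
    """Estimate each feature's importance by shuffling its column and
    measuring the mean absolute change in the model's outputs."""
    rng = random.Random(seed)
    n_features = len(X[0])
    baseline = [model(x) for x in X]
    importances = []
    for j in range(n_features):
        total = 0.0
        for _ in range(trials):
            col = [x[j] for x in X]
            rng.shuffle(col)
            perturbed = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            total += sum(abs(b - model(p))
                         for b, p in zip(baseline, perturbed)) / len(X)
        importances.append(total / trials)
    return importances

# A tiny made-up dataset; the explanation should rank feature 0 highest
# and feature 2 (which the model ignores) at zero.
X = [[1.0, 2.0, 5.0], [4.0, 0.0, 1.0], [2.0, 3.0, 2.0], [0.0, 1.0, 4.0]]
imp = permutation_importance(model, X)
```

The accuracy-versus-interpretability tradeoff shows up here too: this explanation is cheap and model-agnostic, but it is only an approximation of what the model does, so its fidelity must be weighed against its simplicity.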

Risk Managers Grapple With Potential Downsides of AI

  • “A blind spot for risk managers in financial services is itself becoming a risk: Few say they have the know-how to properly analyze the potential downsides of artificial intelligence.
  • 11 percent of risk managers in banking, capital markets and insurance say they are fully capable of assessing AI-related risks, according to a survey of 683 risk managers in nine countries released this week by Accenture.”

What Does The Future Hold For AI? Five Predictions

  1. “AI Will Move Beyond Deep Learning
  2. Intelligence Will Be Pushed Out To The Edge
  3. Algorithms Will Learn To Reason Causally
  4. Explainability Will Become A Hard Requirement
  5. AI Will Progress From Perception To Action
    • Products that aren’t explainable—with the exception of those that solve trivial, inconsequential problems—will not go very far.
    • Another trend that is going to make itself known in the very near future is that of applications moving beyond making mere recommendations to actually automating actions as well.”
