“A blind spot for risk managers in financial services is itself becoming a risk: Few say they have the know-how to properly analyze the potential downsides of artificial intelligence.
11 percent of risk managers in banking, capital markets and insurance say they are fully capable of assessing AI-related risks, according to a survey of 683 risk managers in nine countries released this week by Accenture.”
As a proponent of AI, I don’t enjoy sharing this type of article, because it will scare some people away. However, as an industry professional, I believe it’s critical to understand both the benefits and detriments of a new technology.
That said, this article is a “must read” for anyone thinking about entering the AI/ML space. It’s important to understand there are inherent risks of being on the “bleeding edge.”
I’ve tried to share a number of key points from the article, but I encourage you to read the post in its entirety.
“Model risk generally refers to the potential for adverse consequences resulting from actions taken or decisions made based on incorrect or misused models or model output.
The SEC disciplined a quantitative investment adviser where an error in the computer code of the quantitative investment model eliminated one of the risk controls in the model, and where that error was concealed from advisory clients.
A robo-adviser advertised that its algorithms would monitor for wash sales but failed to do so accurately in 31 percent of the enrolled accounts; the SEC found that the adviser had made false statements to its clients.
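For context, a wash sale occurs when a security is sold at a loss and a substantially identical security is repurchased within roughly 30 days. The kind of check the robo-adviser advertised might look something like this minimal sketch (all names and data are illustrative, and the one-sided 30-day window is a deliberate simplification of the actual rule):

```python
from datetime import date

def flag_wash_sales(trades, window_days=30):
    """Flag loss sales followed by a repurchase of the same symbol
    within `window_days` (simplified, one-sided wash-sale rule)."""
    flagged = []
    for sale in trades:
        if sale["side"] != "sell" or sale["gain"] >= 0:
            continue  # only sales at a loss can trigger a wash sale
        for buy in trades:
            if (buy["side"] == "buy"
                    and buy["symbol"] == sale["symbol"]
                    and 0 < (buy["date"] - sale["date"]).days <= window_days):
                flagged.append((sale["symbol"], sale["date"]))
                break
    return flagged

trades = [
    {"side": "sell", "symbol": "XYZ", "date": date(2019, 1, 10), "gain": -500},
    {"side": "buy",  "symbol": "XYZ", "date": date(2019, 1, 25), "gain": 0},
    {"side": "sell", "symbol": "ABC", "date": date(2019, 1, 10), "gain": 200},
]
print(flag_wash_sales(trades))  # the XYZ loss sale is flagged
```

Even a check this simple can silently fail for a subset of accounts if, say, the enrollment flag or the trade feed for those accounts is wired up incorrectly, which is the kind of coding error the SEC action describes.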
Mortgage lenders have been accused of incorrectly denying loan modifications due to computer errors.
Banks have suffered anti-money laundering compliance failures due to coding errors.
As banks, asset managers and other financial services firms begin to deploy artificial intelligence or machine learning—whether in credit risk scoring, fraud detection, robo-advisory services, algorithmic trading, insurance underwriting or other areas—the potential model risks and related consequences increase.
Following guidance from the FDIC and other regulators, financial services firms have generally developed tools to identify, measure and manage those model risks. But that guidance predates the AI renaissance.
But in an AI world, where models work by identifying patterns in large data sets and making decisions based on those patterns, replication of the model’s output (let alone reviewing performance across a range of inputs) becomes far more difficult.
How will a court determine (1) whether there were any defects in the model design, input or output; (2) whether any defect caused any adverse decision; (3) which party—among the model developer (or licensor), model user (or licensee), or the financial institution’s customer—assumed the risk of the error or defect; and (4) the amount of any damages?
In other instances, particularly in the context of AI models, defects may be caused by a misinterpretation of underlying data, or by reliance on coincidental correlations without a causal connection, which may be much more difficult to detect.
In the context of AI models, though, which may use machine learning to detect patterns in millions of data points (e.g., credit application data, or asset management decisions), simply re-running the model with the same inputs may produce different outputs, because what the machine has learned continues to change.
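The point about re-running is worth making concrete. Even a toy model fit by stochastic gradient descent lands on slightly different parameters depending on its random initialization and the order in which it samples the data. The function and data below are invented purely for this sketch:

```python
import random

def train_tiny_model(data, seed, epochs=100, lr=0.1):
    """Fit w in y ~ w*x by SGD from a random starting point.
    The learned weight depends on the random initialization
    and on the order in which samples are drawn."""
    rng = random.Random(seed)
    w = rng.uniform(-1, 1)
    for _ in range(epochs):
        x, y = rng.choice(data)
        w -= lr * (w * x - y) * x  # gradient step on squared error
    return w

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
w_a = train_tiny_model(data, seed=0)
w_b = train_tiny_model(data, seed=1)
print(w_a, w_b)  # same data, different runs -> slightly different models
```

Both runs land near the "true" slope of about 2, but not on identical values, which is why simply re-running an AI model is a weak replication test compared with re-running a fixed formula.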
Model developer versus model user. The liability as between a model developer and a model user is typically governed by the terms of an agreement, including representations, warranties and indemnification provisions. Some such agreements may be “as is” agreements, where warranty or indemnification obligations are disclaimed by the developer. In other instances, the model user may negotiate that the developer retains liability for its negligence or gross negligence.”