- “Financial regulators have yet to publish a comprehensive set of artificial intelligence (AI)-related rules, as most efforts so far have focused on guidelines and principles for financial institutions to make good use of AI. However, senior banking regulators warned on Friday (June 3) that existing regulation could also apply to AI.
- ‘If they don’t have the appropriate governance, risk management and controls for AI, they shouldn’t use AI,’ Palmer said in the same panel discussion.”
- “The emerging field of explainable AI (or XAI) can help banks navigate issues of transparency and trust, and provide greater clarity on their AI governance. XAI aims to make AI models more explainable, intuitive, and understandable to human users without sacrificing performance or prediction accuracy. Explainability is also becoming a more pressing concern for banking regulators who want to be assured that AI processes and outcomes are “reasonably understood” by bank employees.
- Specifically, we consider the following questions: 1) How should banks weigh the benefits of explainability against potential reductions in accuracy and performance? 2) What’s the most effective way for AI teams to prioritize efforts that enhance transparency across the model development pipeline? 3) Which models should be the highest priority for explainability? And 4) How should banks deploy their limited resources to ensure explainability across their model inventory?”
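One way to make the explainability trade-offs in these questions concrete is a model-agnostic technique such as permutation importance, which scores each input feature by how much prediction accuracy drops when that feature’s values are randomly shuffled. The sketch below is a minimal, stdlib-only illustration; the toy credit scorer, feature names, and data are all hypothetical and not drawn from the articles quoted here.

```python
import random

def model_predict(row):
    # Hypothetical toy credit scorer standing in for a bank's opaque model.
    # row = [income, age, zip_digit]; zip_digit deliberately has zero weight.
    score = 0.8 * row[0] + 0.2 * row[1] + 0.0 * row[2]
    return 1 if score > 0.5 else 0

def accuracy(predict, X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(predict, X, y, feature_idx, seed=0):
    """Drop in accuracy after shuffling one feature column.

    A large drop means the model relies heavily on that feature;
    a drop near zero means the feature barely affects predictions.
    """
    base_acc = accuracy(predict, X, y)
    rng = random.Random(seed)
    col = [r[feature_idx] for r in X]
    rng.shuffle(col)
    X_perm = [r[:feature_idx] + [v] + r[feature_idx + 1:]
              for r, v in zip(X, col)]
    return base_acc - accuracy(predict, X_perm, y)

# Tiny invented dataset: the label follows income and ignores the rest.
X = [[0, 0, 3], [1, 0, 7], [0, 1, 1], [1, 1, 9], [0, 0, 4], [1, 1, 2]]
y = [0, 1, 0, 1, 0, 1]

for name, idx in [("income", 0), ("age", 1), ("zip_digit", 2)]:
    print(name, permutation_importance(model_predict, X, y, idx))
```

Because the technique needs only predictions, not model internals, it is one of the cheaper ways to triage which models in an inventory most need deeper explainability work.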
- “Automation and AI-based technologies are playing an ever-increasing role at each stage of the customer service journey, from chatbots to customer data analysis.
- Yet delivering optimal customer service through AI comes with new and expanding security regulations.
- Indeed, online chatbots powered by automated or AI-based Q&As have become the tool of choice for many financial organisations.
- The backbone of the new EU policy on AI stipulates different levels of risk ranging from unacceptable risk to minimal risk.
- According to the current definitions for each risk level, none of the AI customer service features currently available in the cloud, such as chatbots, automated Interactive Voice Response technology, or Caller Line Identification tools, qualifies as an unacceptable risk, meaning that all cloud contact centre operations can continue as normal.
- Another area of AI integration that stands to face greater regulation is the online dashboard. AI uses data from these dashboards to personalise the online experience – it can analyse the most popular functions of online banking and take users directly to frequently used tabs.”
- “‘I am generally in favour of regulating a particular application rather than a technology’ in general, Yann LeCun said in an interview.
- ‘We could possibly also develop robots that are somewhat intelligent and autonomous: machines that we give a task to but don’t explain how to do it,’ LeCun said.”
- “A group of US banking regulators—including the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and the Consumer Financial Protection Bureau (CFPB), among others—has issued a statement that they’re seeking public input on the rising usage of AI by financial institutions (FIs), Reuters reports.
- US banking regulators, including the Fed and Federal Deposit Insurance Corporation, are seeking information on how banks use AI.
- This provides an opportunity for FIs and the government to set clear expectations as the tech gains prevalence.”