The FCA sets out its approach to the opportunities and risks of Artificial Intelligence for financial services

In this blog, Will Finn, Senior Manager of the IT and Cyber Risk Team, discusses the Financial Conduct Authority’s (FCA) speech on Artificial Intelligence and what it means for the financial services sector.

Artificial Intelligence (AI) is predicted to transform every area of our lives and our work – and the financial services sector is no exception.

AI’s influence on the sector has grown so much that, earlier this month, the Chief Executive of the FCA gave a speech setting out the regulator’s approach to AI. Nikhil Rathi outlined the main opportunities and risks that AI poses to financial services firms, and explained how existing regulations such as the Senior Managers & Certification Regime and the Consumer Duty should guide firms’ use of the technology.

The opportunities of AI

Mr Rathi’s speech to The Economist revealed that the FCA aims to help firms take advantage of the opportunities AI can offer. These include:

  • Improved productivity, including the use of AI as a conversational tool for customer support.
  • Better advice for all customers and investors, not just those who can afford specialist advice.
  • Generative AI and synthetic data, which could help to “improve financial models and cut crime”, while AI tools can help to tackle fraud and money laundering “more quickly and accurately and at scale.”
  • “Hyper-personalised” financial products and services – for example, insurance products which more precisely meet customers’ needs.

The risks of AI

However, Mr Rathi spoke of the importance of the regulator putting “the right guardrails in place” to mitigate the risks of AI for the financial services sector.

A key risk is that misinformation fuelled by social media can move prices across global markets – for example, an AI-generated image purporting to show an explosion near the Pentagon “jolted global financial markets” until it was revealed to be a hoax.

Mr Rathi also warned that AI could make cyber fraud, cyber attacks and identity fraud more sophisticated and effective. “This means that as AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate at the same time,” he said.

Focus on Big Tech

The speech demonstrated that the regulator is particularly focused on the growing role Big Tech firms are playing in the financial sector. Mr Rathi acknowledged that there are opportunities for financial services firms to partner with Big Tech, with the objective of “increasing competition for customers and stimulating innovation”.

But he also said the regulator is looking at the risks that Big Tech companies (like Google and Meta) may pose to the sector. In particular:

  • Reliance on a small number of Big Tech firms could increase concentration risk and affect operational resilience in “payments, retail services and financial infrastructure”.
  • There is a “data-sharing information asymmetry” between Big Tech firms and financial services firms, because Big Tech firms are gatekeepers of data.
  • Big Tech firms could exploit consumer biases, and they have access to “unique and comprehensive data sets such as browsing data, biometrics and social media”.

The FCA has begun to assess whether Big Tech firms could “introduce significant risks to market functioning”, and firms should prepare for regulation that may follow. In the meantime, financial services firms should build this risk into their operational resilience planning. Firms using third parties such as cloud service providers must also be clear about who is accountable if anything goes wrong with those services, causing harm to customers or to the viability of the business.

A regulatory framework for AI

The speech hinted at further regulation of AI in the financial sector, noting that the Prime Minister recently announced the UK’s intention to become “the home of global AI safety regulation”. However, it also clarified that there is already relevant regulation in place to which firms must adhere.

The FCA requires firms to take an “outcomes-based approach” to protect customers and encourage innovation. The most relevant regulations include:

The Consumer Duty, which came into force on 31 July 2023, requires firms to design products and services that aim to secure good consumer outcomes, and to demonstrate how all parts of their supply chain deliver those outcomes. This also applies to firms’ use of AI within those products and services.

The Senior Managers & Certification Regime (SMCR) gives “a clear framework to respond to innovations in AI,” according to Mr Rathi. When AI tools are used to inform or make decisions, there can be uncertainty over who is accountable for the impact of those decisions. But the SMCR makes clear that senior managers retain ultimate accountability for all of the firm’s activities. The speech also notes suggestions in Parliament for a bespoke SMCR-type regime for individuals managing AI systems, which will be “part of the future regulatory debate”.

It is important that firms follow regulatory developments around AI and financial services closely in the coming months and years so that they remain compliant with any new rules. But the existing regulations show that there is substantial work and planning that firms should already be undertaking.

Contact fscom today to discuss your firm’s preparedness to mitigate the risks of AI, and to exploit the opportunities it brings. 


This post contains a general summary of advice and is not a complete or definitive statement of the law. Specific advice should be obtained where appropriate.
