What is Artificial Intelligence?
A recent Financial Conduct Authority (FCA) discussion paper, DP22/4: Artificial Intelligence, offered the following definition of Artificial Intelligence (AI):
‘It is generally accepted that AI is the simulation of human intelligence by machines, including the use of computer systems, which have the ability to perform tasks that demonstrate learning, decision-making, problem solving, and other tasks which previously required human intelligence. Machine learning is a sub-branch of AI.
AI, a branch of computer science, is complex and evolving in terms of its precise definition. It is broadly seen as part of a spectrum of computational and mathematical methodologies that include innovative data analytics and data modelling techniques.’
- What do firms need to consider?
Many PIMFA members are already using AI systems and tools in their everyday operations, and it is likely that the adoption of AI in financial services will increase rapidly over the next few years.
PIMFA firms already use AI tools for several purposes. For example, they analyse large volumes of data quickly, easily, and accurately, which enables their employees to spend more time working with and for their clients.
There are concerns that as AI becomes more advanced, it could introduce new risks, for example:
the system develops biases in its decision-making
leading firms to make bad decisions
resulting in poor outcomes for their clients
This is why it is essential that firms deploying AI systems have a suitably robust control framework around their AI components to keep a careful check on what they are doing.
As with any innovation, AI has the potential to make fundamental and far-reaching improvements in how firms serve their clients. However, it must be monitored continually to manage the risks and maximise the benefits.
- What are regulators doing?
A number of government departments are asking regulators such as the Financial Conduct Authority (FCA), Bank of England (BoE), Information Commissioner's Office (ICO) and Competition and Markets Authority (CMA) to publish an update on their strategic approach to AI and the steps they are taking in line with the White Paper. The Secretary of State has asked for this update by 30 April 2024.
On 13 March 2024, the EU Parliament approved the EU Artificial Intelligence Act. The EU AI Act sets out a comprehensive legal framework governing AI, establishing EU-wide rules on data quality, transparency, human oversight and accountability. It features some challenging requirements, has a broad extraterritorial effect and potentially huge fines for non-compliance.
Maria Fritzsche
Senior Policy Adviser - Operational Policy, Regulation and Innovation Lead
Latest news
Don’t be Complacent About AI Compliance
Read here an article from the PIMFA Journal #33 by Vicky Pearce at B-Compliant about how to use AI safely and compliantly.
FCA Supercharged Sandbox: second cohort applications
The FCA has advised that the Supercharged Sandbox will open for a second cohort in 2026, offering firms another opportunity to test and develop AI use cases.
The FCA is particularly interested in applications that demonstrate the use of agentic AI, with selected firms to be involved in a collaborative programme of development, testing and engagement.
Applications will close on 1 June 2026, with firms notified of outcomes on 26 June 2026. The cohort launch is scheduled for 13 July 2026. Read more details here.
If firms have any queries regarding the second cohort, the FCA can be contacted directly here.
Talent and Controls – Where AI Initiatives Succeed or Stall
Read here an article from the PIMFA Journal #33 by Edward Russell at Solve about how talent and controls aid successful AI adoption.
FCA Speech: Supporting Fintech in the Next Phase of Innovation
The FCA has published a speech by Jessica Rusu (Chief Data, Information and Intelligence Officer) which considers the role of AI in reshaping financial services.
The speech also highlighted the regulator's commitment to supporting innovation, setting out the next phase of its AI Lab: extending its partnership with NVIDIA and scaling up the Supercharged Sandbox.
With reference to the recently published Open Finance roadmap, the FCA also announced that the Scale‑Up Unit is open for expressions of interest from solo‑regulated firms.
The second cohort for AI Live Testing was also announced.
Read the full speech here.
The Wealth and Asset Management Operating Model Can’t Keep Up
Read here an article from the PIMFA Journal #33 by Richard Doherty and Sumit Johri at Publicis Sapient about why the traditional playbook, built on manual processes, siloed business and technology functions, and relationship-driven models, is no longer sufficient.
Regulators respond to Treasury Committee report on AI in Financial Services
The FCA has responded to the Treasury Committee’s report on AI in financial services by reiterating that existing regulatory frameworks already require firms to manage risks arising from AI use, including governance, data, operational resilience and consumer protection.
The FCA also pointed to its ongoing work to assess the longer-term impact of AI on financial services through its review work and broader engagement with industry. Alongside this, the BoE has confirmed that it is carrying out scenario analysis and simulations to test how increased AI adoption could affect financial stability, including potential herding behaviour during periods of market stress.
If you are a PIMFA member and would like to join our discussions on this and AI, please contact Maria Fritzsche.
To read the full response, click here.
Leading Lights Forum Report 2025/26
AI: Evolution, Revolution or Devastation? Read the new Leading Lights Report