What is Artificial Intelligence?
A recent Financial Conduct Authority (FCA) discussion paper, DP22/4: Artificial Intelligence, offered the following definition of Artificial Intelligence (AI):
‘It is generally accepted that AI is the simulation of human intelligence by machines, including the use of computer systems, which have the ability to perform tasks that demonstrate learning, decision-making, problem solving, and other tasks which previously required human intelligence. Machine learning is a sub-branch of AI.
AI, a branch of computer science, is complex and evolving in terms of its precise definition. It is broadly seen as part of a spectrum of computational and mathematical methodologies that include innovative data analytics and data modelling techniques.’
- What do firms need to consider?
Many PIMFA members are already using AI systems and tools in their everyday operations, and it is likely that the adoption of AI in financial services will increase rapidly over the next few years.
PIMFA firms already use AI tools for several purposes. For example, AI can analyse large volumes of data quickly, easily, and accurately, freeing employees to spend more time working with and for their clients.
There are concerns that as AI becomes more advanced, it could introduce new risks, for example:
- A system develops biases in its decision-making
- Leading firms to make bad decisions
- Resulting in poor outcomes for their clients
This is why it is essential that firms deploying AI systems have a suitably robust control framework around their AI components to keep a careful check on what they are doing.
As with any innovation, AI has the potential to make fundamental and far-reaching improvements to how firms serve their clients. However, it must be continually monitored and regularly checked to manage the risks and maximise the benefits.
- What are regulators doing?
A number of government departments are asking regulators such as the Financial Conduct Authority (FCA), Bank of England (BoE), Information Commissioner's Office (ICO) and Competition and Markets Authority (CMA) to publish an update on their strategic approach to AI and the steps they are taking, in line with the White Paper. The Secretary of State has asked for this update by 30 April 2024.
On 13 March 2024, the EU Parliament approved the EU Artificial Intelligence Act. The EU AI Act sets out a comprehensive legal framework governing AI, establishing EU-wide rules on data quality, transparency, human oversight and accountability. It features some challenging requirements, has a broad extraterritorial effect and potentially huge fines for non-compliance.
Latest news
An AI for an AI: Artificial Intelligence as a sword and a shield in the battle against fraud
Read this article by Alan Baker & Hannah Bohm-Duchen, Farrer & Co, in our latest Journal.
FCA addresses AI’s Role in Financial Services
The Financial Conduct Authority (FCA) recognises that AI has the potential to transform decision-making and customer experiences in UK financial services for the better, but it also raises concerns about how the technology can be used safely and responsibly. The regulator notes that innovation depends not just on computing power and data, but crucially on confidence that AI can be used safely and responsibly in UK financial markets.
To build this confidence, the FCA is taking a collaborative approach with initiatives like AI Live Testing, balancing the need for regulatory clarity with avoiding stifling innovation in this rapidly evolving technology space.
To find out more about the FCA’s AI Live Testing initiative, please click here
Redefining Risk: Women, AI, and the democratisation of investing
Read this article by Zoe Morton, RSM, in our latest Journal.
PIMFA WealthTech Tech Sprint Report
PIMFA WealthTech recently conducted a tech sprint focusing on Artificial Intelligence.
The question posed was “How can wealth management and financial advice firms leverage AI to enhance operational efficiency by optimizing end-to-end processing across front, middle, and back-office functions?”
Read the tech sprint’s findings here
Bank of England: AI Consortium (inaugural meeting)
The Bank of England (BoE) has published the minutes of the first meeting of the AI Consortium (AIC), which provides a platform for public-private engagement on AI.
Challenges and risks were discussed, for example:
- The growing reliance on third-party providers
- How widespread use of similar AI models could amplify systemic vulnerabilities
- Risks of contagion
- The potential for generative AI to introduce misleading information into financial markets
- The risk of unfairness
- The threat of AI-driven fraud and cyberattacks
Noting the BoE's and the FCA's pragmatic yet flexible approach to regulation to date, the AIC stressed the need to coordinate with other regulators, jurisdictions and sectors.
Read the minutes here.
FCA Speech – Harnessing AI and technology
The FCA has published a speech by its Chief Data, Information and Intelligence Officer, Jessica Rusu.
The speech focused on harnessing AI and technology to deliver the FCA’s 2025 strategic priorities and noted initiatives such as the Supercharged Sandbox (a collaboration between the FCA and Nvidia).
The Supercharged Sandbox will commence in October 2025, with the complementary AI Live Testing offering open for applications from the week commencing 7 July 2025.
With regards to authorisations and supervision, the FCA advised of:
- Testing large language models to analyse text and deliver efficiencies
- The use of predictive AI to assist supervisors
- Using conversational AI bots to redirect consumer queries to relevant agencies such as the Financial Ombudsman Service (FOS).
Read the speech here.