
What is Artificial Intelligence?

A recent Financial Conduct Authority (FCA) discussion paper, DP22/4: Artificial Intelligence, offered the following definition of Artificial Intelligence (AI):

‘It is generally accepted that AI is the simulation of human intelligence by machines, including the use of computer systems, which have the ability to perform tasks that demonstrate learning, decision-making, problem solving, and other tasks which previously required human intelligence. Machine learning is a sub-branch of AI.

AI, a branch of computer science, is complex and evolving in terms of its precise definition. It is broadly seen as part of a spectrum of computational and mathematical methodologies that include innovative data analytics and data modelling techniques.’

Many PIMFA members are already using AI systems and tools in their everyday operations, and it is likely that the adoption of AI in financial services will increase rapidly over the next few years.

PIMFA firms already use AI tools for several purposes. For example, they analyse large volumes of data quickly, easily, and accurately, which enables their employees to spend more time working with and for their clients.

There are concerns that as AI becomes more advanced, it could introduce new risks, for example:

  • the system developing biases in its decision making
  • leading firms to make bad decisions
  • resulting in poor outcomes for their clients

This is why it is essential that firms deploying AI systems have a suitably robust control framework around their AI components to keep a careful check on what they are doing.

As with any innovation, AI has the potential to make fundamental and far-ranging improvements in how firms serve their clients. However, it must be monitored and checked continually to manage the risks and maximise the benefits.


A number of government departments have asked regulators such as the Financial Conduct Authority (FCA), the Bank of England (BoE), the Information Commissioner's Office (ICO) and the Competition and Markets Authority (CMA) to publish an update on their strategic approach to AI and the steps they are taking in line with the White Paper. The Secretary of State has requested this update by 30 April 2024.

On 13 March 2024, the EU Parliament approved the EU Artificial Intelligence Act. The EU AI Act sets out a comprehensive legal framework governing AI, establishing EU-wide rules on data quality, transparency, human oversight and accountability. It features some challenging requirements, has a broad extraterritorial effect and potentially huge fines for non-compliance.


PIMFA WealthTech Tech Sprint Report

PIMFA WealthTech recently conducted a tech sprint focusing on Artificial Intelligence.

The question posed was “How can wealth management and financial advice firms leverage AI to enhance operational efficiency by optimizing end-to-end processing across front, middle, and back-office functions?”

Read the tech sprint’s findings here

Bank of England: AI Consortium (inaugural meeting)

The Bank of England (BoE) has published the minutes of the first meeting of the AI Consortium (AIC), which provides a platform for public-private engagement on AI.

Challenges and risks were discussed, for example:

  • The growing reliance on third-party providers
  • How widespread use of similar AI models could amplify systemic vulnerabilities
  • Risks of contagion
  • The potential for generative AI to introduce misleading information into financial markets
  • The risk of unfairness
  • The threat of AI-driven fraud and cyberattacks

Noting the BoE's and the FCA's pragmatic yet flexible approach to regulation to date, the AIC stated the need to coordinate across other regulators, jurisdictions and sectors.

Read the minutes here.

FCA Speech – Harnessing AI and technology

The FCA has published a speech by their Chief Data, Information and Intelligence Officer, Jessica Rusu.

The speech focused on harnessing AI and technology to deliver the FCA’s 2025 strategic priorities and noted initiatives such as the Supercharged Sandbox (a collaboration between the FCA and Nvidia).

This will commence in October 2025, with the complementary AI Live Testing offering open for applications from the week commencing 7 July 2025.

With regards to authorisations and supervision, the FCA advised of:

  • Testing large language models to analyse text and deliver efficiencies
  • The use of predictive AI to assist supervisors
  • Using conversational AI bots to redirect consumer queries to relevant agencies such as the Financial Ombudsman Service (FOS).

Read the speech here.

FCA: Supercharging AI Innovation: FCA Partners with Nvidia

At London Tech Week 2025, Jessica Rusu, the FCA’s Chief Data, Information and Intelligence Officer, unveiled major steps the regulator is taking to support safe, responsible AI in financial services, including a new collaboration with Nvidia.

The FCA AI Lab launched in January, featuring four key zones:

  • AI Sprint – collaborative events shaping outcome-based AI regulation
  • AI Input Zone – identifying transformative AI use cases
  • AI Spotlight – a live repository of real-world AI applications
  • Supercharged Sandbox – an upgraded space for early-stage AI experimentation. Applications are now open for firms to test their proof-of-concept AI solutions in the Supercharged Sandbox from October 2025.

Rusu’s speech also covered the new partnership with Nvidia, which will bring advanced AI tools and GPU computing power to the Supercharged Sandbox, accelerating development for start-ups and innovators. AI Live Testing allows firms to work with the FCA on real-time AI model testing, creating a shared understanding of responsible AI use.

The FCA is showing that regulation can drive innovation, positioning the UK as a global hub for FinTech and AI leadership.

Read the full speech here.
