
Key measures for mitigating AI risks in financial services.

Artificial Intelligence (AI) is rapidly transforming various sectors, bringing significant advantages and equally substantial challenges. As AI becomes more embedded in daily operations, the question of liability when things go wrong becomes increasingly critical. This article explores the complexities of AI liability and highlights the importance of a sector-specific approach to regulation, especially in the UK.

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles: safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress. These principles were outlined in the AI Regulation White Paper, published in March 2023, and the pro-innovation response, published in February 2024. Regulators will implement this framework within their sectors by applying existing laws and issuing supplementary regulatory guidance.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators. Although the framework will not be codified into law for now, the Government anticipates the need for targeted legislative interventions in the future to address gaps in the current regulatory framework, particularly concerning General Purpose AI and key players involved in its development.

Despite their crucial role, neither regulators' implementation of the principles nor the requirement for them to collaborate will be legally binding. The Government anticipates the need to introduce a legal duty on regulators to give due consideration to the framework’s principles. The decision to do so, and its timing, will be informed by the regulators’ strategic AI plans and a planned review of regulatory powers and remits to be conducted by the Central Function.

The Government expects individual regulators to take swift action to implement the AI regulatory framework in their respective domains. This expectation was communicated clearly through letters sent by the Department for Science, Innovation and Technology (DSIT) to leading regulators, asking them to publish their strategic approach to AI regulation by 30 April 2024. These plans should include:

  • An outline of measures to align their AI plans with the framework’s principles
  • An analysis of AI-related risks within their regulated sectors
  • An explanation of their existing capacity to manage AI-related risks
  • A plan of activities for the next 12 months, including additional AI guidance

The publication of these plans will provide the industry with valuable insights into regulators’ strategic direction and forthcoming guidance and initiatives.

On 22 April 2024, the Financial Conduct Authority (FCA) and the Bank of England (including the Prudential Regulation Authority (PRA), together the Bank) published updates on their approach to AI. Firms using AI should now expect to have to explain that use to their regulators, including how the risks associated with deploying AI have been identified, assessed, and managed. The acid test is: if the regulator asks, do you have a convincing narrative about your approach to managing the risks associated with AI?

Preparing Your Business for AI Regulation

  1. Understand and Explain AI Usage: Be prepared to explain how AI is being used at all levels of your business, including by suppliers and outsourced service providers.
  2. Compliance with Legal and Regulatory Obligations: Understand how your legal and regulatory obligations interact with any existing or proposed use of AI in your business.
  3. Data Protection: Ensure that client and commercial data is protected. Understand how your staff are using AI and what systems and data their AI tools can access.
  4. Robust Governance Arrangements: Implement and maintain robust governance arrangements, systems, and controls to discharge your legal and regulatory obligations in connection with AI. This might involve, for example, the implementation of an “AI policy” supported by an internal register of AI use, as sketched after this list.
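
To make the governance point concrete, the following is a minimal sketch of how an entry in an internal AI use register might be recorded. The field names, the example system, and the risk rating are assumptions made for illustration, not a prescribed regulatory format.

from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class AIUseRegisterEntry:
    """One entry in a hypothetical internal AI use register (illustrative only)."""
    system_name: str          # the AI tool or model in question
    business_purpose: str     # what the system is used for
    provider: str             # in-house, supplier, or outsourced service provider
    data_accessed: list[str]  # categories of client/commercial data the tool can reach
    accountable_owner: str    # named individual responsible for oversight
    risk_rating: str          # outcome of the firm's own risk assessment
    human_oversight: str      # description of the human checkpoints in place
    last_reviewed: str = field(default_factory=lambda: date.today().isoformat())


# Example entry -- all details are invented for illustration.
entry = AIUseRegisterEntry(
    system_name="customer-service-chatbot",
    business_purpose="First-line responses to routine account queries",
    provider="Third-party supplier (outsourced)",
    data_accessed=["customer contact details", "account status"],
    accountable_owner="Head of Operations",
    risk_rating="medium",
    human_oversight="Escalation to a human agent for complaints or anything resembling advice",
)

# A register serialised like this gives the firm a ready answer when a regulator
# asks how and where AI is being used across the business.
print(json.dumps(asdict(entry), indent=2))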

Financial services providers are increasingly turning to AI to enhance operations and customer experiences. However, the adoption of AI in this sector also brings specific liability challenges:

  1. Economic Loss: AI technologies are used in various financial services, from chatbots to investment analysis. Algorithmic errors, insufficient or inaccurate data, and lack of proper training can lead to significant financial losses for both the service providers and their customers.
  2. Data Protection Claims: AI systems often process large amounts of personal and non-personal data. Compliance with data protection laws, such as the General Data Protection Regulation (GDPR) and the Data Protection Act 2018, is crucial to avoid regulatory fines and customer claims.
  3. Security Breaches and Data Loss: The reliance on third-party frameworks and open-source code in AI development can create security vulnerabilities. Inadequate security measures can lead to data breaches, resulting in customer complaints and potential compensation claims.
  4. Intellectual Property Rights (IPR) Ownership and Infringement: Disputes over the ownership of AI-generated IP and claims of IPR infringement are common. Determining liability can be challenging, especially when AI systems operate autonomously and without clear human direction.
  5. Bias and Discrimination: AI systems can produce biased outcomes due to insufficient or low-quality data. These biases can lead to discrimination, customer complaints, and legal challenges, particularly if certain groups are unfairly disadvantaged by AI decisions.

Financial services businesses can take several measures to mitigate AI-related risks:

  1. Explainable AI: Ensuring that AI decisions are traceable and understandable can help in pinpointing where processes went wrong and assigning liability. A simple decision-logging sketch follows this list.
  2. Data Protection: Conducting Data Protection Impact Assessments (DPIAs) ensures that AI systems comply with data protection laws, reducing the risk of data protection claims and fines.
  3. Security: Implementing robust security measures protects data from breaches and corruption, which can otherwise lead to significant liability.
  4. Human Intervention: Maintaining appropriate levels of human oversight in AI processes can help mitigate issues and ensure accountability.
  5. Audit: Regularly auditing AI systems can identify potential risks and ensure compliance with legal and regulatory standards.
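
By way of illustration only, the sketch below shows one way to make automated decisions both explainable and auditable: each decision records its inputs, the contribution of each factor, and the outcome in a timestamped log that can be reviewed later. The scoring rules, thresholds, and file name are invented for the example and do not reflect any particular firm's model.

import json
from datetime import datetime, timezone


def score_application(applicant: dict) -> dict:
    """Toy credit-style decision with per-factor reasons (illustrative only)."""
    # Each rule contributes points and a human-readable reason, so the final
    # decision can be traced back to the factors that drove it.
    contributions = []
    score = 0

    if applicant["monthly_income"] >= 2500:
        score += 40
        contributions.append(("income", 40, "monthly income at or above 2,500"))
    if applicant["missed_payments_12m"] == 0:
        score += 35
        contributions.append(("payment_history", 35, "no missed payments in 12 months"))
    if applicant["existing_debt_ratio"] < 0.4:
        score += 25
        contributions.append(("debt_ratio", 25, "existing debt below 40% of income"))

    decision = "approve" if score >= 60 else "refer to human underwriter"
    return {"score": score, "decision": decision, "contributions": contributions}


def log_decision(applicant_id: str, applicant: dict, result: dict) -> None:
    """Append a structured, timestamped audit record for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "inputs": applicant,
        "score": result["score"],
        "decision": result["decision"],
        "reasons": [reason for _, _, reason in result["contributions"]],
    }
    with open("ai_decision_audit_log.jsonl", "a") as log_file:
        log_file.write(json.dumps(record) + "\n")


applicant = {"monthly_income": 3200, "missed_payments_12m": 0, "existing_debt_ratio": 0.55}
result = score_application(applicant)
log_decision("APP-001", applicant, result)
print(result["decision"], result["contributions"])

Records like these support both human review of individual outcomes and periodic audits across the whole decision history.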

As AI technology continues to evolve, so too must the legal frameworks that govern it. In the UK, while specific laws exist to address certain aspects of AI, a more comprehensive, sector-focused approach is necessary. By understanding and addressing the complexities of AI liability, businesses can better manage risks and ensure that they are prepared to handle any issues that arise from AI-related errors or harm.

For further insights on how VENDOR iQ can support your operational resilience needs, visit our website or contact us directly at info@vendoriq.co.uk.

 

The content of this article is intended for informational purposes only and should not be considered as legal advice. VENDOR iQ makes no representations as to the accuracy or completeness of any information contained herein. For specific legal guidance related to AI liability, please consult with a qualified legal professional.
