Managing AI risks in insurance: from data security to good governance

Updated as of: 14 October 2025

Insurance leads in AI adoption, yet a third of UK customers lack confidence in the technology. As regulations evolve and ethical concerns mount, how can insurers adopt effective AI governance and compliance measures?


95% of insurance companies are already deploying AI – the highest percentage of companies within the financial services sector, according to a 2024 Bank of England survey. 

Generative AI (genAI) gives insurers powerful new tools for reasoning and decision-making, with the latest models analysing vast, unstructured data sets to identify underlying risks, offering highly personalised and empathetic responses to customer queries, and automating complex workflows. 

Insurers are already implementing AI across core business areas, from fraud detection, underwriting and claims management to customer services and back-office functions such as finance, actuarial and IT.

While AI holds immense potential for insurers, it’s not without risks such as bias and discrimination. Consumer trust in the technology is low, with 34% of UK customers indicating that “nothing would give them confidence in insurers using AI,” the 2025 Guidewire European Insurance Consumer Survey found. This underscores the importance of transparency, fairness and education regarding how AI is deployed in insurance settings. 

Lexology PRO considers some of the unique challenges AI poses to the insurance industry and how companies can reap the benefits while minimising the risks.

What are the unique risks of AI in insurance? 

A fragmented regulatory landscape

Different jurisdictions vary substantially in their approach to AI regulation, which presents a compliance challenge for multinational organisations. Companies need to navigate the EU’s detailed, codified regime, while interpreting broader and more fragmented rules in the UK, US and beyond.

AI systems used in pricing, policy drafting and claims handling may fall within the high-risk category, triggering rigorous compliance obligations under the EU AI Act 2024. Insurers operating high-risk systems may need to meet data governance requirements, adopt mechanisms to enable human supervision and tighten security protocols. 

Failure to comply with the EU AI Act's provisions on high-risk systems could result in fines of up to €35 million (US$40 million) or 7% of global annual turnover, whichever is greater. 

The UK has yet to enact AI-specific legislation, but laws such as the UK General Data Protection Regulation have significant implications for AI models that process personal data – a category likely to include those deployed by insurance companies. 

Although the US government seems hesitant to impose comprehensive regulations on AI developers, certain states such as Colorado have enacted risk-based AI legislation. The Colorado AI Act aims to protect consumers from algorithmic discrimination within high-risk AI systems, including those deployed in financial services. Operators of systems that make or substantially influence "consequential decisions" must maintain information on their systems’ data, limitations and risk mitigation measures, as well as take “reasonable care” to prevent discrimination, among other measures.

Ethical concerns 

All AI systems carry risks associated with bias and discrimination. However, these risks are particularly significant when they could affect an individual’s access to financial services, including insurance. 

If companies fail to take adequate steps to curtail the risk of algorithmic discrimination, their models could either refuse to offer insurance to certain groups or impose prohibitively high premiums.

The risk is particularly great when insurance companies rely on generic, third-party genAI tools, or if there is an inadequate understanding of the AI’s decision-making process, contributing to the “black-box” effect.

Risks of processing personal data 

For AI to deliver on the personalisation benefits insurance companies are seeking, models will need to process vast amounts of sensitive or personal data, from health records to financial histories.

Established data protection laws, including GDPR and the growing number of US state consumer privacy laws, continue to apply in AI contexts. This presents a complex compliance challenge for insurance companies.

This challenge is exacerbated by the sheer volume of data processed by AI systems, as well as by issues around consent and data subject rights – it may not be straightforward to honour a customer's request to have their data corrected or deleted once it has been input into an AI system. 

Ultra-personalisation challenges underwriting principles

AI enables insurers to tailor premiums and offers with greater personalisation. However, if this reaches a stage where AI models facilitate near-perfect segmentation of risk, prohibitively high premiums may exclude high-risk individuals or businesses. Taken to the extreme, the resulting coverage gaps could undermine the purpose, value and attractiveness of insurance altogether. 

Reaping the benefits of AI while minimising the risks

Embracing robust AI governance and risk management as well as establishing comprehensive data management and security systems will be essential for insurers to adopt AI in a responsible, ethical and compliant manner. 

Here are some measures companies should consider implementing. 

Robust data management practices

In the AI era, data is likely to become insurance companies’ greatest asset, while also being a substantial liability. For that reason, it’s essential to establish effective data management and security systems.

This includes using techniques such as data anonymisation and pseudonymisation to reduce re-identification risks, enforcing robust access controls, and having incident response plans in place to follow in the event of a data breach. 
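As a minimal sketch of what pseudonymisation can look like in practice, the snippet below replaces a customer identifier with a keyed hash before the record is passed to an analytics or AI pipeline. The field names, identifier format and key-handling approach are illustrative assumptions; in production the secret key would be held in a secrets manager, and pseudonymisation alone does not make data anonymous under the GDPR.

```python
import hmac
import hashlib

# Illustrative only: in production this key would come from a secrets manager,
# never from source code.
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymise(customer_id: str) -> str:
    """Return a stable pseudonym for an identifier using a keyed hash (HMAC-SHA256).

    Without the key, the mapping cannot feasibly be reversed; with it, the
    controller can still link records belonging to the same customer.
    """
    return hmac.new(SECRET_KEY, customer_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical claim record with a direct identifier.
record = {"customer_id": "CUST-10492", "postcode": "SW1A 1AA", "claim_amount": 1250.0}

# Same record with the raw identifier replaced by its pseudonym.
safe_record = {**record, "customer_id": pseudonymise(record["customer_id"])}
```

Because the hash is deterministic for a given key, records for the same customer remain linkable for analytics and model training while the raw identifier stays out of the AI system.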

Embed AI in enterprise risk and compliance frameworks

Effective AI governance is vital to ensure AI risks are appropriately mitigated and regulatory compliance is maintained.

Insurance companies should be able to evidence how they are deploying AI responsibly, with appropriate safeguards, oversight mechanisms, and accountability structures in place.

This will entail establishing board oversight of AI governance; maintaining clear audit trails of decisions made by or with the assistance of AI; and carrying out regular reviews of internal policies to accommodate regulatory changes and/or technological developments. 
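An audit trail of AI-assisted decisions can be as simple as an append-only log recording which model produced which recommendation, who reviewed it, and whether a human overrode it. The sketch below shows one such structure; the field names, file format (JSON lines) and example values are illustrative assumptions, not a prescribed standard.

```python
import json
from datetime import datetime, timezone

def log_decision(path, *, case_id, model, output, human_reviewer=None, override=None):
    """Append one AI-assisted decision to a JSON-lines audit log and return it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,                # the claim or policy the decision concerns
        "model": model,                    # which model/version produced the output
        "output": output,                  # the AI's recommendation
        "human_reviewer": human_reviewer,  # who signed off, if anyone
        "override": override,              # the final human decision, if it differed
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage: an AI triage model fast-tracks a claim and a human signs off.
entry = log_decision(
    "ai_decisions.jsonl",
    case_id="CLM-2025-0042",
    model="claims-triage-v3",
    output="fast-track",
    human_reviewer="j.smith",
)
```

Keeping the model version and any human override in each entry is what later lets reviewers reconstruct who (or what) made a given decision and on what basis.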

Ensure adequate human oversight

This is a particular concern for companies deploying AI systems categorised as "high-risk" under emerging AI laws, which are usually subject to stricter oversight and transparency requirements.

Insurers will need to ensure there is sufficient understanding of AI systems’ decision-making processes, maintain thorough documentation of these processes and establish accountability frameworks among software providers, users and insured parties.

Maintaining records of human input and reviewing decisions made by AI can be useful in assisting businesses with assessing and managing risk arising from AI use. Insurers will also need to ensure staff have appropriate training on how an AI system works and its limitations, so they can appropriately balance automation with oversight. 

Audit for bias and discrimination

Insurance companies face severe reputational damage, regulatory scrutiny and potential legal action if their AI systems discriminate against certain groups.

To prevent this, it will be essential for insurers to conduct regular audits of their end-to-end processes in which AI systems are involved, monitoring for potential instances of bias and ensuring transparency in AI decision-making to uphold ethical standards.

Companies should also ensure that the data used to train AI systems is fair, representative and relatively free from historical biases that could produce discriminatory outcomes. 
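One common starting point for such an audit is comparing approval rates across groups. The sketch below computes per-group rates and flags a disparity using the "four-fifths" (0.8) threshold; the group labels, sample decisions and threshold are illustrative assumptions, and a real audit would use far richer metrics and legal input.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: (demographic group, was cover approved?).
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True),
]

rates = approval_rates(decisions)      # group A: 0.75, group B: 0.5
ratio = disparate_impact_ratio(rates)  # 0.5 / 0.75 ≈ 0.67
flagged = ratio < 0.8                  # below four-fifths threshold: escalate for review
```

A check like this does not prove or disprove discrimination on its own, but run regularly over end-to-end outcomes it gives insurers an early, auditable signal that a model's decisions warrant closer human scrutiny.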

See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.   

Track the latest data protection updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool. 

Stay up to date with key developments and in-depth articles by following Lexology’s AI hub.