
The global P&C market is entering a period in which potential AI liability loss triggers and exposures are multiplying in number and severity across a widening range of business lines.
In a limited number of cases so far, claims are beginning to materialise amid a wave of AI-related litigation and an expanding range of loss triggers.
The wider exposures run through classes of business including commercial general liability, cyber, tech E&O, D&O and PI.
This emerging risk may soon pose profound questions for carriers: where and how soon losses could crystallise, at what frequency and severity, and how existing wordings may need revision.
Questions around identifying and tracking exposure to silent AI – whereby insurers are exposed to a raft of AI-related claims across different business lines because wordings do not explicitly include or exclude AI liability – will likely fuel industry debate about potential systemic risk in the near future.
As corporations across multiple sectors move from experimentation with AI solutions to live implementation for different use cases, this adoption curve will drive both exposures and London market discussions.
AI tools in the broad category of generative AI, most notably large language models (LLMs), are being deployed at increasing scale. LLMs are deep-learning models trained on vast datasets of text and code, enabling them to perform language-based tasks such as text generation, translation and question answering.
Policies across various relevant lines were not designed to provide generative-AI-related coverage, or at least not for the scale at which these AI liability risks could accumulate in the coming years.
Karthik Ramakrishnan, CEO of Armilla, an MGA focussed solely on AI insurance, explained: “Do existing tech E&O, professional liability, general liability and cyber insurance policies adequately cover AI? The answer is ‘maybe’.
“That is not a great answer for the insurance industry.”
One CEO of another MGA added: “The reality is, many syndicates don't know how exposed they are to AI risks, because they're not asking those questions and they have no data. Some are, at least, looking to understand the potential additional frequency they might have to endure due to AI risks in cyber or general liability.”
The Lloyd’s Market Association (LMA) has recently started to look more deeply at potential AI loss scenarios for various insurance products and the use of model cyber clauses to insure, limit or exclude the attendant risks.
David Powell, head of technical underwriting at the LMA, told Insurance Insider that the trade body has conducted a market survey to gather perspectives from underwriters and other groups in the market.
A central aim of the survey is to establish what syndicates know – and don’t know – about the current use of AI systems by insureds.
For now, there’s no central public repository or report detailing the quantum of AI liability claims coming through. However, as one underwriting source noted: “They are literally just starting to be observed and being drawn to the attention of CUOs and CEOs; they’re just not yet being talked about openly.”
The findings of the survey will be released in the coming weeks.
Definitions, loss triggers, silent exposures
Freddie Scarratt, global deputy head of InsurTech at Gallagher Re, defined AI liability insurance as covering liability arising from the output of an AI system that affects third parties, “such as copyright infringement, personal injury and privacy injuries by unintentional disclosure of data, and errors and omissions arising from automated decisions and workflows”.
This is separate from performance guarantees of AI, which are policies more commonly sold to vendors (though also sometimes to corporations using AI), he explained.
Scarratt said AI liability has reached a similar stage to cyber insurance 25 years ago.
“We are seeing risks associated with AI being addressed on a case-by-case basis within existing insurance policies, whether that’s cyber insurance or E&O policies.
“The problem is: those are silent coverages. They're not designed to cover it.”
He added that start-ups and traditional players are entering the AI liability space and designing products to fill that gap.
Since Munich Re’s pioneering venture into this space with its aiSure proposition in 2018, there has been an influx of such start-ups, including Armilla, Relm Insurance, CoverYourAI, AiShelter and Testudo, as detailed in a WTW report.
Testudo is one of the latest AI specialist MGAs to appear and has built its own proprietary database of AI litigation.
If there’s uncertainty over silent AI exposures, there is at least certainty that demand is increasing for these specialty products, as AI solutions start to occupy more critical positions in businesses across multiple sectors.
In tandem with the rise of product innovation, the range of potential loss triggers has expanded, opening the door for claims to arise through various routes such as intellectual property (IP) infringement, LLM-powered chatbots giving false information, corporate espionage in which trade secrets are stolen, AI model drift and potential discrimination in AI decision-making.
These example categories are not exhaustive and each has sub-categories of types of potential claims.
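One of those triggers, AI model drift, describes a deployed model's accuracy degrading as live inputs diverge from the data on which it was trained. Purely as an illustration – the article specifies no particular metric, and the figures below are hypothetical – drift is commonly monitored with the population stability index, which compares binned input distributions at training time against production:

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """Population stability index (PSI) between two binned distributions.

    A common monitoring rule of thumb: < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 significant drift warranting investigation.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # floor empty bins to avoid log(0)
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Hypothetical share of inputs per feature bin: training baseline vs. live traffic.
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
live = [0.05, 0.15, 0.30, 0.30, 0.20]
print(f"PSI = {population_stability_index(baseline, live):.3f}")  # ~0.19: moderate drift
```

An insured whose drift monitoring lapses while a model quietly degrades into erroneous or discriminatory decisions is exactly the route by which this trigger could mature into a claim.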
The movement among corporations to live deployment of LLMs, almost as a BAU process, is driving a lot of these potential exposures. It has been a haphazard, sometimes dangerous journey for corporations, but technical faults or failures of models are just one strand of a multifaceted story around AI liability.
Another important element is the rise of AI-centred litigation.
An ominous litigation landscape
In several developed jurisdictions, the legal frameworks and relevant regulations governing AI’s usage are a loosely connected patchwork of existing IP laws, data-protection regulations and commercial law, leaving a knotty, under-developed landscape where major lawsuits will start to fill in the gaps with case law.
Emerging themes among AI-related lawsuits with implications for insurance include unresolved issues around copyright infringement, accountability for technical errors, chatbots interacting with customers in dangerous ways and, latterly, corporate espionage.
This publication has selected only a few cases involving AI models, out of hundreds, that are either illustrative of the risks or could impact insurers directly.
Two important caveats apply to the model risk lawsuit examples here: exclusions would typically apply in cases involving suicide, and in the Air Canada case the loss was not large enough to trigger a meaningful claim. These lawsuits are nonetheless illustrative of how technical and oversight issues with chatbots can have far-reaching consequences.
The copyright cases demonstrate that clarity is needed, from new regulatory frameworks and case law, over how IP ownership should be determined and controlled.
Claire Davey, head of product innovation and emerging risk at Relm Insurance, said: “We're tracking outcomes of the IP litigation in terms of who owns the IP when content is created through generative AI. Is the person who has inputted the prompts the owner? Is it the gen AI model itself? Can you ever attribute IP from something that's been created through gen AI?”
She added that there is “a lot of concern”, particularly from those that insure the larger LLM providers, about IP infringement and the data used to train models.
The general expansion of silent AI exposures, whether through litigation or other triggers, has prompted debate over how far the AI market is mirroring the trajectory of the cyber market – a trajectory that culminated in the fractious debate over implementing cyber war exclusions and the need for standalone cyber war products.
The carve-out question: will AI have its own cyber war moment?
Some experts believe the market for AI liability may need its own “cyber war moment” in future – which is to say one where certain AI liability exposures are carved out for standalone policies.
Other sources disagreed, believing AI exposure will largely be absorbed into cyber, with some AI coverage, such as for data privacy violations, remaining in other classes including tech E&O, albeit with specialist resources built around it over the longer term.
When AI tools are deployed to exploit software vulnerabilities, cyber insurance seems likely to be the appropriate product.
Relm Insurance’s Davey said the insurance industry will address any need for exclusions in AI liability faster than it did with cyber.
“What will drive the exclusions is probably a regulatory standpoint, or Lloyd's, for instance, requiring them. Cyber is softening, so insurers probably won't apply those exclusions unless they absolutely have to.”
Armilla’s Ramakrishnan added: “The corollaries from the shift in the cyber insurance industry, and the combination and consequence of factors that we see with AI liability insurance, are almost identical.”
Sources pointed to three possible routes for AI coverage. One is that it remains within existing products, with this element of the policy assessed with greater scrutiny and expertise.
The second is that it becomes an extension product around existing policies as exposures and loss data emerge.
The third is that it follows a pattern similar to cyber, in that certain AI risks are excluded, for example, from general liability and switched to standalone products.
In any of these scenarios, it seems inevitable that the industry will need to expand its specific expertise in AI model risk assessment. At this stage, it is unclear whether this expertise will be developed internally or bought in from outside the industry.
Michael von Gablenz, head of Insure AI at Munich Re – which has been providing AI liability products for eight years – explained the importance of amassing technical expertise in understanding AI models and their probabilistic nature.
“We have built up a talent pool, and relationships with universities, which are very important recruiting grounds for AI talent. Some of the biggest advances in AI are made by former PhD students who just came from university. We also consider how we can make this talent pool available to our primary insurance clients.”
He explained that insuring model hallucinations involves qualitative technical assessment of the underlying data science, as well as quantitative assessment based on the model's performance data.
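Munich Re has not disclosed its methodology, but a minimal sketch of what a quantitative assessment of performance data might look like follows. It places a conservative statistical ceiling on a hallucination rate observed in a finite evaluation set, then converts that ceiling into an indicative expected loss; every parameter and figure is a hypothetical assumption for illustration only.

```python
import math

def wilson_upper_bound(failures: int, trials: int, z: float = 1.96) -> float:
    """Upper limit of the Wilson score interval for a binomial proportion.

    Puts a conservative ceiling on an observed failure rate, reflecting
    the sampling uncertainty of a finite evaluation set.
    """
    if trials <= 0:
        raise ValueError("trials must be positive")
    p_hat = failures / trials
    denom = 1 + z**2 / trials
    centre = p_hat + z**2 / (2 * trials)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
    return (centre + margin) / denom

# Hypothetical evaluation: 37 hallucinations observed across 5,000 test prompts.
ceiling = wilson_upper_bound(37, 5000)   # ~1.02% at 95% confidence vs. 0.74% observed

# Illustrative expected-loss view for a performance guarantee (all figures assumed):
annual_queries = 2_000_000               # insured's annual query volume
claims_per_failure = 0.001               # 1 in 1,000 failures escalates to a claim
avg_severity = 25_000                    # assumed average cost per claim, USD

expected_annual_loss = annual_queries * ceiling * claims_per_failure * avg_severity
print(f"95% ceiling on hallucination rate: {ceiling:.2%}")
print(f"Indicative expected annual loss: ${expected_annual_loss:,.0f}")
```

The design point is that pricing runs off the confidence ceiling rather than the raw observed rate, so the thinner the evaluation data, the more conservative the assumed failure rate becomes.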
His team has also partnered with academic institutions, including the University of Oxford, to help quantify the risks of AI models. The resulting research papers could prove seminal for the development of this market.
Giving a different view, Testudo CEO and co-founder George Lewin-Smith said: “From an insurability perspective, performance does not necessarily equal liability risk.”
He explained that the firm's database shows legal action can be launched against even the highest-performing models.
Expertise, data and analysis needs
As Insurance Insider has explored previously, several fundamental questions need to be addressed for more established carriers to provide specific AI products.
Understanding AI tail risk, as well as modelling for AI model failures and loss triggers, will need to be refined. In addition, regulation may need to resolve ambiguities around accountability between corporate user and AI developer.
Naturally, given the pace at which the industry responds to emerging risks and the gradual emergence of loss data, it will take time for these questions to be addressed, but the market might not have the luxury of time with AI risks.
First movers are gaining strategic and operational competitive advantages by refining their expertise and underwriting know-how around AI models – how they falter or fail in various circumstances – as well as other elements such as how litigation triggers AI-related claims.
This kind of expertise will be difficult to expedite, but in the near future it will be increasingly vital for carriers to hold such technical knowledge in-house across several business lines.
In a more immediate sense, the prevalence of litigation, and the rapid acceleration of AI model capabilities and their usage in a live environment, will both likely drive demand among insureds.
Like everything with AI, data will be vital. It may soon become essential for the London market at large to identify, tag, track and assess the data emerging, in order to understand exposures to silent AI, accumulation risk and the extent and pace of exclusions needed.
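No market-wide standard for that exercise exists yet. As a minimal illustrative sketch – the data structures, keyword list and claims records below are all hypothetical – a first pass might simply flag claims whose free-text descriptions mention AI and aggregate them by business line to show where silent exposure is accumulating:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical keyword list for a crude first-pass flag.
AI_KEYWORDS = ("artificial intelligence", "machine learning", "llm",
               "chatbot", "generative ai", "model drift", "hallucination")

@dataclass
class Claim:
    claim_id: str
    line_of_business: str   # e.g. "cyber", "tech E&O", "GL"
    description: str

def is_ai_related(claim: Claim) -> bool:
    """Flag a claim whose free-text description mentions an AI term."""
    text = claim.description.lower()
    return any(keyword in text for keyword in AI_KEYWORDS)

def silent_ai_exposure_by_line(claims: list[Claim]) -> Counter:
    """Count flagged claims per business line to surface accumulation hotspots."""
    return Counter(c.line_of_business for c in claims if is_ai_related(c))

# Illustrative usage with fabricated records:
book = [
    Claim("C-001", "tech E&O", "Customer alleges LLM-powered chatbot gave false advice"),
    Claim("C-002", "GL", "Slip-and-fall at insured premises"),
    Claim("C-003", "cyber", "Breach traced to code produced by a generative AI assistant"),
]
print(silent_ai_exposure_by_line(book))   # Counter({'tech E&O': 1, 'cyber': 1})
```

A production approach would more likely rely on structured AI-usage questions at the proposal stage than on keyword matching, but even a crude flag starts to generate the exposure data the market currently lacks.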