Will State Farm’s AI discrimination suit break the regulatory dam?
Analysis
State regulators have largely avoided enforceable AI regulations, but bad news could change that.
When State Farm was sued in Illinois three years ago for using AI to discriminate against Black policyholders, the threat of AI to the insurance industry had attracted relatively little attention.
With artificial intelligence growing faster than most humans can follow, many executives were still evaluating how – and whether – to use the technology. Insurance regulators, meanwhile, were only beginning to consider how to oversee it.
To some extent, that calm continues today. Regulators have issued advisories, and AI has become a topic of discussion at most conferences, but enforceable rules specifically targeting the emerging tech are few and far between.
But all of that could soon change, and fast. Sources said all it would take for regulators to begin cracking down is a prominent example of wrongdoing – or simple carelessness – within the industry, one that catalyzes regulatory interest and draws public attention.
“I think all that is going to accelerate more regulatory activity is something ending up on the front page of the New York Times or Wall Street Journal,” said Michael Byrne, a partner at law firm McDermott Will & Schulte.
“Somebody gets sued or misuses the technology, and the whole industry will be painted with the same broad brush and then the overregulation will follow.”
The State Farm example
The lawsuit filed against State Farm in December 2022 in Illinois state court alleged the insurer violated the Fair Housing Act by treating Black and white homeowner insurance policyholders differently.
The plaintiffs said they uncovered that discrimination via a large-scale survey of policyholders across the Midwest that they said revealed “marked disparities” between State Farm’s handling of claims by homeowners of different races.
Specifically, they alleged the carrier’s use of automated claims processing methods and machine-learning algorithms was behind the disparities.
The AI allegedly used a number of signals as a proxy for race when denying claims, including voice, geolocation data, social media presences, browser search history and historical housing and claims data, according to the lawsuit.
The case is notable, however, not because it is unique. State Farm itself has been sued more than once over alleged AI-driven discrimination, and health insurers including Cigna, Humana and UnitedHealth have also been criticized or sued over alleged AI-driven claims denials.
In the P&C world, however, the lawsuit against State Farm could end up being one of the first to make a big splash. The case is proceeding through discovery, meaning it is mature and approaching trial.
Should a trial take place, the carrier will be vulnerable to the whims of a jury – which sources said could reach a verdict that attracts the kind of national attention that would propel greater scrutiny of the P&C industry and its use of AI.
The state of play in the states
As it turns out, regulating AI is a somewhat messy business.
With multi-billion-dollar data centers being proposed across the country to support AI growth, the sector is lifting the entire US economy and continually pushing the stock market to all-time highs.
Last week, for instance, chip maker and OpenAI partner Nvidia became the world’s first-ever $5tn company by riding that wave.
In terms of regulation, the growth has attracted the attention of the federal government in particular.
The Trump administration has set “global dominance in artificial intelligence” as a key goal, encouraging hundreds of billions of dollars in investment in the space. Congress meanwhile has toyed with the idea of preempting state regulations on AI.
All of that only complicates the path for state regulators. Sources said state insurance regulators in particular may be reluctant to get in the way, fearing that President Donald Trump and Congress might lay down conflicting ground rules.
As a result, most of the state-level action around regulating AI has come in the form of advisory bulletins.
The National Association of Insurance Commissioners (NAIC) adopted a bulletin in late 2023 that broadly reminds insurers that their adoption of AI systems must comply with existing insurance laws and regulations, including those addressing unfair trade practices and discrimination. As of August, 24 states had adopted that bulletin.

More robust laws have been resisted by the industry, at least at the NAIC level.
This past summer, the NAIC Big Data and Artificial Intelligence Working Group convened to discuss a potential model law governing AI use, building on a nationwide request for information seeking input from stakeholders on the potential implementation of such a law.
Many stakeholders expressed concerns that the law would be premature or even unnecessary, however.
The National Association of Mutual Insurance Companies, for instance, encouraged the NAIC to view AI as a “tool” that is already subject to a “robust insurance statutory framework” that can be applied once controversies arise.
The American Property Casualty Insurance Association said that consumers stand to benefit from the cost efficiencies of AI use by P&C insurers, including through rapid claims settlement and better risk assessment.
Insurance technology firm Verisk also said in comments this year that a law would be redundant.
But some states have forged ahead with stronger consumer protections in the absence of that high-level regulator consensus.
Colorado stands out in that realm and was frequently cited as one of the leading states in terms of proactively regulating insurer use of AI.
Its regulations require companies to keep documented risk management procedures and inventories detailing their use of external consumer data – which includes social media habits, purchasing habits, credit scores and other records.
Notably, Colorado’s 2023 regulation specifically addresses life insurer use of the technologies, though the Colorado Department of Insurance has suggested similar rules could be applied to P&C lines as well.
New York’s Department of Financial Services issued similar guidance, known as a circular letter, in 2024. It instructs insurers to show that their use of external consumer data is not unfairly discriminatory and is supported by generally accepted actuarial practices.
Insurers are also required to test whether their AI systems unfairly impact a protected class of people.
“The principal concerns that regulators have are that there’s a risk that the use of artificial intelligence – or different kinds of non-traditional data sources – in connection with insurance practices risks violating state anti-discrimination law,” said Stephanie Dobecki, a partner at law firm Sidley who works on financial regulatory issues.
“That's one of the underlying concerns that we see across all of the frameworks,” she continued.
Kathleen Birrane, a partner at law firm DLA Piper who was previously the insurance commissioner of Maryland, added that regulating AI use among insurers is complex, and requires a thorough understanding of models and actuarial processes.
Understanding that data can be hard for both insurers and their regulators, making action even more difficult.
Birrane also noted that insurance regulation is often a collaborative process among state insurance commissioners, and the complexity of the challenge is daunting.
“You don’t have a real consensus on all these points among regulators at this time,” she said.
Greater use of AI means plaintiffs’ bar interest
AI has many applications for insurers, and sources warned that each could present potential legal risks for carriers.
Claims professionals have expressed interest in AI to help streamline their handling of relatively simple claims, for instance, though many top-level claims professionals have told this publication that they are wary of allowing automated denials. Loss adjusters have also seized on the technology to triage claims after major storms.
Consulting and modelling firms such as Deloitte, McKinsey and Verisk have meanwhile identified AI as a potent tool, since it can quickly contextualize data and process information faster than before.
Most major carriers have internal AI guidelines and best practices that seek to steer them clear of violating anti-discrimination and anti-competition laws.
Meghan Dalton, a partner at the law firm Clyde & Co, said that while relatively few lawsuits have arisen from insurer use of AI so far, growing privacy concerns are likely to bring greater scrutiny of insurer AI practices.
“As the use of AI rises, concerns around bias and discrimination will increasingly become a focus for the plaintiffs' bar, especially as more companies integrate AI into their operations,” Dalton said.
“The use of AI in the claims handling process will be particularly scrutinized and will need to be carefully implemented to avoid any bias.”
By Clark Mindock
November 06, 2025