AI Distribution · Compliance

AI Is Already Distributing Insurance. The Regulations Haven't Caught Up.

Waniwani Team

Insurance is being sold through AI. Not in a lab. Not in a pilot. Right now, on platforms used by hundreds of millions of people.

Consumers are asking ChatGPT, Claude, and soon Gemini for home insurance estimates. They're getting prices, coverage details, and links to purchase, all generated by large language models that didn't exist three years ago.

And yet, no regulator in the world has published a definitive rulebook for this.

We're in a grey zone. Here's what it looks like, what we actually know today, and what's likely coming.

Why AI insurance distribution is a grey zone

The laws exist. They just weren't written for this.

Insurance distribution has been regulated for decades. In Europe, the Insurance Distribution Directive (IDD) governs who can sell insurance, what disclosures are required, and how products should be presented. In the US, state insurance commissioners enforce similar rules through a patchwork of state-level regulations. In the UK, the FCA's Consumer Duty demands good outcomes regardless of the channel.

These frameworks are technology-neutral, which sounds like a strength until you try to apply them to a conversation between a user and a large language model, mediated by a tool call to an API that returns a price estimate.

The problem isn't that regulations don't apply. It's that nobody knows exactly how they apply.

Three questions nobody has answered definitively

1. When does "information" become "advice"?

The IDD draws a clear line between providing information about an insurance product and recommending one. Cross that line, and you trigger a whole set of additional obligations: suitability assessments, documentation, liability.

When a human broker says "this policy covers flooding," that's information. When they say "this is the right policy for you," that's advice.

But what happens when an LLM says "based on what you've told me, this seems like a good fit"? The model wasn't instructed to say that. No one at the carrier approved it. The response was generated probabilistically by a system that is, by design, trying to be helpful.

The advice boundary, the single most important regulatory line in insurance distribution, becomes dangerously blurry the moment you hand the conversation to a model you don't fully control.
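
There is no standard technical answer to this yet. But to make the problem concrete, here is a minimal sketch of an advice-boundary check a carrier might run over any response text it does control (its own chat channel, or the text its tool returns to the model). The phrase list is entirely illustrative, and a real control would need far more than regex matching:

```ts
// A minimal sketch of an advice-boundary check, applied to text the carrier
// does control. The phrase list and escalation behaviour are illustrative
// assumptions, not a regulatory standard.
const ADVICE_PATTERNS: RegExp[] = [
  /\b(right|best|good)\s+(policy|fit|choice)\s+for\s+you\b/i,
  /\bI\s+(would\s+)?recommend\b/i,
  /\byou\s+should\s+(buy|choose|take)\b/i,
];

type BoundaryCheck =
  | { verdict: "information" }
  | { verdict: "possible-advice"; matched: string };

function checkAdviceBoundary(text: string): BoundaryCheck {
  for (const pattern of ADVICE_PATTERNS) {
    const match = text.match(pattern);
    if (match) return { verdict: "possible-advice", matched: match[0] };
  }
  return { verdict: "information" };
}

// Example: this sentence would be flagged for review rather than served.
console.log(checkAdviceBoundary("This seems like the right policy for you."));
```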

2. Who is the "distributor"?

Under the IDD, insurance distribution includes "work preparatory to the conclusion of contracts," "provision of information concerning insurance contracts," and even "price comparison." The definition is deliberately broad.

So when an LLM platform hosts a tool that returns insurance estimates, who is distributing? The carrier that built the tool? The platform that hosts it? The AI model that decides when and how to call it?

Today, the legal answer is: the licensed carrier bears ultimate responsibility. But this gets complicated fast. The carrier controls the tool's inputs and outputs, but not the conversation around it. The LLM platform controls the user experience, but may argue it's just providing infrastructure. And the model itself is a black box that can paraphrase, omit, or embellish anything it receives.

We're operating in an architecture where responsibility is distributed across entities, but regulation assumes a single accountable party.

3. How do you prove compliance when you can't see the conversation?

This is perhaps the most under-appreciated challenge. Modern AI distribution architectures, particularly those built on protocols like the Model Context Protocol (MCP), are designed so that the tool provider (the carrier) receives only the structured tool call, not the full conversation.

The carrier can't see what the user asked. Can't see how the LLM framed the response. Can't verify that mandatory disclosures were shown, that the AI-generated disclaimer was included, or that the model didn't add a recommendation it wasn't supposed to make.

You're expected to comply with regulations you can't observe yourself complying with.
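
To make that asymmetry concrete: under MCP, the carrier's server receives a JSON-RPC tools/call request, and nothing else. A sketch of the carrier's entire view of the interaction (the tool name and argument fields are hypothetical):

```ts
// Everything the carrier's MCP server sees: a structured tools/call request.
// The surrounding conversation -- what the user asked, how the model framed
// the answer -- never reaches the carrier. Tool name and fields are made up
// for illustration.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 17,
  method: "tools/call",
  params: {
    name: "get_home_insurance_estimate", // hypothetical tool
    arguments: {
      postalCode: "75011",
      dwellingType: "apartment",
      sizeSqm: 55,
    },
  },
};

// The carrier returns a structured result and hopes the model relays it
// faithfully; nothing in the protocol confirms what the user was shown.
console.log(JSON.stringify(toolCallRequest, null, 2));
```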

We've seen this before. It was called Google.

Before AI chatbots started distributing insurance, another technology disrupted how consumers found and compared policies: search engines.

When Google became the dominant way people discovered insurance, it created a quiet regulatory crisis. A consumer typing "cheap home insurance" into Google would see a ranked list of results, some organic, some paid. Google wasn't selling insurance. It was ranking it. And ranking, it turns out, is a regulatory tripwire.

From search to comparison to intermediary

The trajectory was predictable in hindsight:

Phase 1: Search as discovery. In the early 2000s, Google was just a way to find an insurer's website. No regulatory issue. It was a phone book.

Phase 2: Comparison sites emerge. By the late 2000s, price comparison websites (PCWs) like Compare the Market, MoneySupermarket, and GoCompare had become dominant in the UK. They took user inputs, queried multiple insurers, and returned a ranked list of quotes. By 2017, aggregators accounted for more than half of all direct motor insurance sales in the UK.

Phase 3: Regulators classify comparison as distribution. The key moment: regulators decided that ranking insurance products by price is insurance distribution. The FCA's 2011 guidance and 2014 thematic review made clear that PCWs were insurance intermediaries, subject to licensing, conduct rules, disclosure obligations, and regulatory supervision. The IDD, adopted in 2016, explicitly included "price comparison" in its definition of insurance distribution. The Swiss Federal Administrative Court went further, ordering comparison platform Comparis to register as an insurance intermediary with FINMA.

Phase 4: Google tries to be the comparison site. Google Compare launched in the UK in 2012 and the US in 2015. Google had to get licensed to sell insurance in 26 US states. The FCA scrutinized it for self-preferencing, placing its own comparison tool at the top of search results, above the comparison sites that were themselves regulated intermediaries. Major carriers (State Farm, Geico, Allstate, Progressive) refused to participate, wary of being reduced to a line item in Google's price ranking. Google Compare shut down in March 2016 after launching in only four US states.

The lesson was clear: the moment a technology layer starts ranking, comparing, or selecting insurance products for consumers, regulators will eventually classify it as distribution and require licensing.

Why the Google precedent matters for AI distribution

AI chatbots are doing something far more powerful than presenting a ranked list.

A comparison website shows you ten prices sorted low-to-high. You see all the options. You make the choice. The website is a passive intermediary; it presents, you decide.

An LLM converses. It asks about your situation. It contextualizes. It explains what's covered and what isn't. It can say "based on your apartment size and location, here's what you'd pay." It can omit options it deems irrelevant. It can frame one product more favorably than another, not because it was instructed to, but because that's what sounded most helpful in the conversational flow.

If regulators decided that a sorted table of prices constituted insurance distribution, then a personalized, contextual conversation that guides a consumer toward a specific product is distribution on steroids.

The comparison site precedent tells us exactly where this is heading. The only question is how fast regulators will get there, and whether carriers will be ready when they do.

The comparison site era took roughly a decade to play out: from unregulated novelty (early 2000s) to fully licensed intermediaries (IDD in 2016). AI distribution won't get that much runway. Regulators are watching. EIOPA is already publishing AI-specific guidance. The NAIC is drafting model laws on third-party AI vendors. EU AI Act enforcement begins in months, not years.

The carriers that thrived in the comparison site era were those that built compliance infrastructure early and treated regulatory ambiguity as a reason to over-prepare rather than wait. The same pattern is playing out now, just faster.

What we do know today about AI insurance compliance

Despite the grey zones, the regulatory landscape isn't a blank page. Several things are clear.

Existing insurance regulations apply to AI. All of them.

Every major regulator has said the same thing: AI doesn't create a regulatory exemption. If you're distributing insurance through AI, you must comply with the same rules as if you were distributing through a human broker or a website.

  • EIOPA published its Opinion on AI Governance and Risk Management in August 2025, making explicit that Solvency II, the IDD, and DORA all apply to AI systems used in insurance, and that the principle of proportionality governs how strictly they apply
  • The FCA confirmed it won't introduce AI-specific regulations, relying instead on its existing framework, particularly the Consumer Duty
  • The NAIC adopted its Model Bulletin on AI in December 2023; as of early 2026, 24 US states have adopted it, requiring insurers to maintain written AI governance programs

The message is unanimous: don't wait for AI-specific rules. The rules already exist. Apply them.

Life and health insurance AI is explicitly high-risk under the EU AI Act

The EU AI Act, the world's first comprehensive AI law, enters its critical enforcement phase on August 2, 2026. Annex III explicitly classifies AI systems used for "risk assessment and pricing in relation to individuals in the case of life and health insurance" as high-risk.

High-risk means:

  • Mandatory risk management systems
  • Data governance and quality requirements
  • Technical documentation
  • Comprehensive logging
  • Human oversight mechanisms
  • Accuracy, robustness, and cybersecurity standards
  • Pre-market conformity assessment

For non-life insurance (home, auto), the classification is less clear-cut. The European Commission's guidelines were expected in early 2026 but may be delayed alongside the Digital Omnibus proposal, which could push some high-risk deadlines to December 2027.

But "might be delayed" is not a compliance strategy. Prudent carriers are treating August 2026 as the binding deadline.

AI disclosure requirements are non-negotiable

Across every jurisdiction, one thing is consistent: if a consumer is interacting with AI, they need to know it.

The EU AI Act requires clear identification when a person is interacting with AI rather than a human. The IDD requires carrier identity, product information (IPID), and human contact options regardless of channel. Colorado's SB 24-205 (effective February 1, 2026) adds bias-prevention obligations and consumer disclosure requirements specifically for AI systems.

The challenge isn't knowing what disclosures are required; it's ensuring the LLM actually delivers them. A model can be instructed to include specific disclaimers, but it can also paraphrase them, truncate them, or skip them entirely if it judges them irrelevant to the conversational flow.
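
One partial mitigation is to treat disclosures as verbatim strings and verify their presence wherever the carrier can observe final output, such as its own channels or synthetic test conversations run against the AI channel. A minimal sketch, with illustrative disclosure text:

```ts
// A minimal sketch of disclosure verification, assuming the carrier can
// observe the final rendered text (e.g. in its own channel, or via synthetic
// testing of the AI channel). Disclosure strings are illustrative.
const MANDATORY_DISCLOSURES = [
  "This estimate is generated with the assistance of an AI system.",
  "You can request to speak with a human advisor at any time.",
];

function missingDisclosures(renderedText: string): string[] {
  // Verbatim match: a paraphrased or truncated disclosure counts as missing.
  return MANDATORY_DISCLOSURES.filter((d) => !renderedText.includes(d));
}

const gaps = missingDisclosures("Here's your estimate: 12 euros/month.");
if (gaps.length > 0) {
  // In production this might block the response, alert compliance, or both.
  console.warn("Missing mandatory disclosures:", gaps);
}
```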

The carrier holds the insurance license, and the liability

In the current architecture, the licensed carrier is the regulated entity. Not the AI platform. Not the infrastructure provider. Not the model.

This means the carrier is responsible for:

  • Verifying that the demands-and-needs test is satisfied before a product is offered
  • Ensuring no unauthorized advice is given
  • Delivering all mandatory disclosures
  • Maintaining audit trails
  • Preventing unfair discrimination in AI-driven pricing
  • Providing human fallback options

Every one of these obligations was designed for a world where the carrier controlled the entire interaction. In AI distribution, the carrier controls a fraction of it.
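
Of that list, the audit trail is the piece a carrier can most directly engineer today: log everything on its own side of the tool boundary, even though the conversation itself stays out of reach. A sketch of what such a record might contain (field names are illustrative, not a regulatory schema):

```ts
// A sketch of an audit-trail record for one AI-mediated quote, assuming the
// carrier logs everything on its side of the tool call. Illustrative only.
interface AiDistributionAuditRecord {
  timestamp: string;                      // ISO 8601
  channel: "mcp" | "api" | "web";
  toolName: string;                       // which tool the model invoked
  toolArguments: Record<string, unknown>; // structured inputs received
  toolResult: Record<string, unknown>;    // structured outputs returned
  disclosuresAttached: string[];          // disclosures sent with the result
  jurisdiction: string;                   // where the quote was permitted
  adviceBoundaryVerdict: "information" | "possible-advice";
}

const record: AiDistributionAuditRecord = {
  timestamp: new Date().toISOString(),
  channel: "mcp",
  toolName: "get_home_insurance_estimate",
  toolArguments: { postalCode: "75011", dwellingType: "apartment", sizeSqm: 55 },
  toolResult: { monthlyPremiumEur: 12, coverage: "standard-home" },
  disclosuresAttached: [
    "This estimate is generated with the assistance of an AI system.",
  ],
  jurisdiction: "FR",
  adviceBoundaryVerdict: "information",
};

console.log(record);
```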

What could change in AI insurance regulation

Third-party AI vendor licensing

The NAIC's Third-Party Data and Models Working Group adopted a broad definition of "third party" in 2025, covering any non-governmental entity providing data, models, or outputs for insurance activities. A model law on third-party oversight is anticipated in 2026, and it could include licensing requirements for AI vendors.

If this happens, technology providers that build AI distribution infrastructure may need their own regulatory status. The line between "technology vendor" and "insurance intermediary" could shift significantly.

The EU Digital Omnibus could redraw compliance timelines

The European Commission's Digital Omnibus proposal (late 2025) could extend high-risk AI enforcement deadlines by linking them to the availability of harmonized standards and compliance support tools. The backstop dates: December 2027 for Annex III systems, August 2028 for product-embedded AI.

This isn't a free pass; it's a conditional extension that depends on standards bodies (CEN-CENELEC) delivering harmonized standards. And EIOPA has made clear that existing insurance regulations apply regardless of AI Act timelines.

Agentic AI in insurance will force regulatory action

The NAIC flagged the rise of agentic AI in insurance in 2025: autonomous systems capable of performing insurance tasks without human input. As AI moves from "chatbot that answers questions" to "agent that can bind coverage," the regulatory pressure will intensify.

The current framework assumes a human makes the final decision. When the AI is the decision-maker, every existing assumption about oversight, accountability, and consumer protection will need to be revisited.

Platform-level regulation for AI distribution is coming

Right now, LLM platforms (OpenAI, Anthropic, Google) are largely unregulated as insurance distribution channels. They're treated as infrastructure, not intermediaries.

But as more insurance products are distributed through these platforms, regulators will inevitably ask: should the platform bear some responsibility for what happens in the conversation? Should it be required to preserve and share conversation logs for regulatory purposes? Should it be licensed?

This is an open question today. It won't be for long.

Cross-border AI insurance compliance will become a flashpoint

An AI chatbot has no borders. A user in France can ask Claude for a Spanish insurance estimate. A German consumer can get a US-licensed product recommended by a model hosted in Ireland.

Insurance licenses are jurisdictional. AI conversations are not. The collision between borderless AI and bordered regulation is inevitable, and no framework currently addresses it comprehensively.
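
Until such a framework exists, the pragmatic control sits at the tool boundary: refuse to quote outside licensed jurisdictions. A minimal sketch, assuming the request carries some jurisdiction signal such as the country of the risk address (which MCP itself does not guarantee):

```ts
// A minimal sketch of jurisdiction gating at the tool boundary. Assumes the
// request includes a jurisdiction signal (here, the country of the insured
// property); the licensed set is illustrative.
const LICENSED_JURISDICTIONS = new Set(["FR", "ES", "DE"]);

interface QuoteRequest {
  riskCountry: string; // where the insured property is located
  postalCode: string;
}

function handleQuoteRequest(
  req: QuoteRequest
): { ok: true } | { ok: false; reason: string } {
  if (!LICENSED_JURISDICTIONS.has(req.riskCountry)) {
    // Refuse rather than quote: an unlicensed cross-border quote is the
    // failure mode regulators are most likely to pursue first.
    return { ok: false, reason: `Not licensed to quote in ${req.riskCountry}` };
  }
  return { ok: true };
}

console.log(handleQuoteRequest({ riskCountry: "US", postalCode: "10001" }));
```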

What this means for insurance carriers

The grey zone won't last forever. Regulations are coming: from EIOPA, from the EU AI Act enforcement, from state-level action in the US, from the NAIC's upcoming model law on third-party vendors.

Carriers that wait for regulatory clarity before acting will find themselves scrambling. Those that build compliance infrastructure now (logging, disclosure enforcement, advice-boundary controls, synthetic testing, jurisdiction gating) will have a structural advantage.

The question isn't whether AI will distribute insurance. It already does. The question is whether your compliance infrastructure is built for a world where the most important conversation with your customer happens inside someone else's AI.

WaniWani builds AI distribution infrastructure for insurance carriers, including the compliance layer that makes it possible. Learn more at waniwani.ai