AI Distribution · Insurance · Research

Do Conversational Quoting Flows Actually Convert Better Than Forms? Here Is What the Research Shows.

WaniWani

Last updated: 2026-03-20

Conversational quoting is the practice of collecting customer information through a dialogue, either with an AI on your site or inside an LLM like ChatGPT, rather than through a traditional form. The question is whether it actually works better.

84% of insurance leads abandon their quote before finishing it (ProPair, 2025). That is the highest abandonment rate of any sector. The industry has spent decades trying to fix this with better forms, fewer fields, progress bars, smarter landing pages. None of it has moved the needle meaningfully.

Meanwhile, the interface is shifting. 33% of U.S. adults have already used ChatGPT for financial advice (Express Legal Funding, 2025). OpenAI has over 800 million weekly active users. And in February 2026, the first real-time insurance quotes went live inside ChatGPT.

The question for insurers, and anyone selling complex, quote-based products, is no longer how to optimise the form. It is whether the form is the right interface at all.

We reviewed peer-reviewed studies, SEC filings, controlled experiments, and legal precedents. Here is what the evidence actually shows.

Why Do 84% of Insurance Leads Abandon Quote Forms?

A form asks you to translate your situation into structured data. It assumes you know what coverage you need, what fields to expect, and that you are committed enough to push through 15 to 20 questions. For visitors who arrive still in discovery, still comparing, still unsure, a form is a dead end. They have questions. A form cannot answer questions.

The data confirms this. Forms with more than seven fields see 67.8% abandonment (Formstack, 2025). In insurance, it hits 84%. Each additional field is friction. On mobile, it is worse.

The industry response has been incremental: fewer fields, conditional logic, multi-step layouts. These help at the margin. But they do not solve the structural problem: a form is the wrong experience for someone who is not yet sure what they want.

Do Conversational Interfaces Actually Convert Better Than Forms?

Yes. The evidence is consistent across peer-reviewed research, platform data, and real-world deployments.

A peer-reviewed study in Frontiers in Digital Health (Soni et al., 2022, n=206) compared a chatbot to a traditional online form for collecting the same structured health data. The chatbot scored an NPS of 24, versus 13 for the form (p<0.001). 69.9% of participants preferred it. The chatbot took longer to complete, and people still liked it more. The experience mattered more than speed.

Typeform, whose one-question-at-a-time interface is the closest mainstream product to a conversational flow, reports a 47.3% average completion rate across 2.6 million forms (Typeform Data On Data Report, 2024). Independent benchmarking from Zuko (93 million sessions) puts traditional form completion at roughly 21%. Different measurement methods, but the direction is consistent: conversational flows complete at roughly double the rate.

Lemonade, the most visible case of conversational quoting in insurance, built its entire model around a chatbot called Maya. Their SEC filings tell the story: 96% of first notices of loss handled by AI without human intervention, roughly 55% of claims fully automated, and pet insurance growing 55% year-over-year to $439 million (Q4 2025 shareholder letter, confirmed by Insurance Journal). Their NPS was independently measured at 79 by Clearsurance in 2019, second only to USAA among 67 insurers.

| Interface type | Completion rate | User preference | Best for |
| --- | --- | --- | --- |
| Traditional form | ~21% (Zuko) | 30% (Soni et al.) | Users who know exactly what they want |
| Structured conversational (Typeform-style) | ~47% (Typeform) | 70% (Soni et al.) | Guided flows with predefined paths |
| AI-powered conversation | Higher (emerging data) | Highest when well-designed | Discovery, complex products, LLM platforms |

The pattern is clear. Conversational interfaces outperform forms on completion, satisfaction, and preference.

How Does AI Take Conversational Quoting Further?

The evidence above validates structured conversational flows: one question at a time, predefined branching. Better than forms, but still rigid. The customer follows your script.

AI conversation is different. It adapts. It answers questions. And it opens two opportunities that scripted flows cannot touch.

AI collects more accurate data

AI skips questions it already has answers to, probes vague responses, and rephrases confusing questions. If someone says “I just bought a 1920s colonial in Austin,” the AI does not need to separately ask for property type, construction year, location, or purchase date. A structured flow asks every question regardless.
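The skip-ahead behaviour described above is essentially slot filling. A minimal sketch, assuming hypothetical field names and hardcoding what an extraction model would plausibly return for the example utterance (the real extraction step would be an LLM or NLU call):

```python
# Sketch of the slot-filling idea behind adaptive questioning.
# extract_slots stands in for an LLM extraction step; the field
# names and the hardcoded extraction are illustrative assumptions.

REQUIRED_SLOTS = ["property_type", "construction_year", "location", "purchase_date"]

def extract_slots(utterance: str) -> dict:
    """Placeholder for an LLM/NLU extraction call. Here we hardcode
    what such a model would plausibly return for the example utterance."""
    if "1920s colonial in Austin" in utterance:
        return {
            "property_type": "colonial",
            "construction_year": "1920s",
            "location": "Austin, TX",
            "purchase_date": "recent",
        }
    return {}

def remaining_questions(filled: dict) -> list:
    """A structured flow asks every question regardless; an adaptive flow
    asks only for slots the conversation has not already filled."""
    return [slot for slot in REQUIRED_SLOTS if slot not in filled]

filled = extract_slots("I just bought a 1920s colonial in Austin")
print(remaining_questions(filled))  # one sentence filled all four slots: nothing left to ask
```

A fixed-script flow would ask four questions here; the adaptive flow asks zero.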

NORC at the University of Chicago tested this directly (n=1,200, November 2024). When AI generated adaptive follow-up questions based on previous answers, responses became significantly more specific and detailed. The catch: aggressive probing, especially early in the conversation, increased attrition. AI adaptation works when it feels like a natural follow-up, not an interrogation.
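The NORC caveat, probe on vague answers but not aggressively and not early, translates naturally into a throttled probe policy. A sketch under stated assumptions: the vagueness heuristic, warm-up length, and probe budget below are illustrative, not values from the study.

```python
# Sketch of a probe policy reflecting the NORC finding: follow up on
# vague answers, but throttle probing early in the conversation.
# The vagueness heuristic and thresholds are illustrative assumptions.

VAGUE_MARKERS = {"maybe", "not sure", "i think", "some", "a while"}

def looks_vague(answer: str) -> bool:
    """Crude stand-in for a model-based specificity check."""
    text = answer.lower()
    return len(text.split()) < 4 or any(m in text for m in VAGUE_MARKERS)

def should_probe(answer: str, turn: int, probes_used: int,
                 warmup_turns: int = 3, max_probes: int = 2) -> bool:
    """Probe only when the answer is vague, the conversation is past
    its opening turns, and there is probe budget left."""
    if turn < warmup_turns:        # no interrogation up front
        return False
    if probes_used >= max_probes:  # cap total follow-ups
        return False
    return looks_vague(answer)
```

The point of the guardrails is the attrition finding: a follow-up should feel like interest, not an interrogation.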

There is also an honesty effect. A systematic review of 26 empirical studies (Papneja & Yadav, Personal and Ubiquitous Computing, 2025) found that the majority show greater self-disclosure to conversational agents than to forms or humans. The effect is strongest on sensitive topics: health conditions, risk behaviours, financial history. These are exactly the inputs insurance applicants routinely underreport. Conversational AI reduces the impression management that causes people to leave things out. For underwriting, that translates to more accurate risk pricing and fewer surprises at claims time.

AI meets the customer where they already are

This is the bigger shift. A conversational flow on your site is still an improvement. But it still requires the customer to find your site, navigate to your quoting page, and start your process.

When quoting happens inside ChatGPT, Claude, or Gemini, the customer is already in a conversation. They have already expressed intent (“I need home insurance for a property in Austin”). Your product can be surfaced, information collected through natural dialogue, and a personalised quote returned, all without the customer leaving the platform where they started.

No form. No redirect. No friction. Just a conversation that ends with a quote.

How AI Conversations Convert Discovery-Phase Visitors on Your Site

Not every opportunity starts with high intent. Many visitors land on your site with half-formed questions. They are curious, not committed. A traditional funnel gives them two paths: browse static content and hope it answers their questions, or jump into a quote form before they are ready.

Most choose a third option. They leave. And they are right to: why would you commit to a 15-field form when you are not even sure you need the product?

An AI conversation changes this. The visitor asks “Do I need flood coverage in Austin?” or “What is the difference between replacement cost and actual cash value?” and gets an immediate, personalised answer. The conversation handles objections, explains product details, and builds confidence. It guides the visitor toward a quote not by forcing them through a funnel, but as a natural outcome of answering their questions.

This is not a scripted chatbot that pops up with “How can I help?” It is a substantive product expert that knows your offering, your pricing logic, and your compliance rules. For discovery-phase visitors, this converts where a form cannot: it offers value before asking for commitment.

WaniWani deploys exactly this kind of AI app on client websites. Each app is trained on the client’s specific products, pricing, and regulatory requirements. Not a generic chatbot, but a product advisor that has a real conversation and guides visitors from curiosity to quote.

How Quoting Inside LLMs Captures Intent at the Source

The larger opportunity is outside your site entirely. When a consumer asks ChatGPT for insurance advice, that is a high-intent moment happening in someone else’s platform. If your product is not quotable there, you are not in the consideration set.

In February 2026, Tuio became the first insurer to offer real-time quotes inside ChatGPT, through a sales MCP (Model Context Protocol) built by WaniWani. Jerry.ai embedded insurance comparison in the same platform. More than a dozen financial services AI apps are now in approval pipelines across LLM platforms.

The purchasing interface is moving. The providers who make their products quotable inside AI platforms will capture a channel where the buyer’s intent is already expressed. The ones who wait will compete for whatever traffic still reaches their website.

WaniWani is the infrastructure layer for this. Sales MCPs connect to ChatGPT, Claude, Gemini, and every major AI assistant. They answer customer questions, generate quotes from the client’s actual backend systems, and capture leads, all inside the AI conversation. One infrastructure layer, every platform. Request a demo to see it in action.

What Are the Risks of AI-Powered Quoting?

AI-powered quoting is not risk-free. Three risks deserve attention, and all three are solvable.

Hallucination. AI can generate confident, specific, and completely wrong information. In a quoting context, that means fabricated coverage terms, incorrect conditions, or invented pricing. Insurance quotes involve exactly the content types most prone to hallucination: numbers, dates, compliance positions, and pricing. The fix: quotes must come from backend pricing engines, never from AI generation.

Legal liability. In Moffatt v. Air Canada (2024), a British Columbia tribunal ruled that Air Canada was liable for its chatbot providing incorrect bereavement fare information. Air Canada argued the chatbot was “a separate legal entity.” The tribunal rejected this. Companies are responsible for what their AI says. The decision was small claims, not binding precedent, but the principle aligns with the direction of AI regulation everywhere.

The trust penalty. A Marketing Science study (Luo et al., 2019) found that disclosing chatbot identity reduced purchase rates by 79.7%. This finding predates the ChatGPT era, and prior AI experience moderates the effect. Inside LLM platforms, the dynamic reverses entirely: the customer already chose to talk to an AI. There is no disclosure shock. Regulation is also making bot disclosure mandatory (Maine’s Chatbot Disclosure Act, effective September 2025), so the question is not whether to disclose but whether the AI is good enough that disclosure does not matter.

All three risks point to the same architecture: semi-structured AI. Define the required data fields. Let AI collect them conversationally. Constrain the AI from inventing prices, making coverage promises, or going off-script on anything compliance-sensitive. Quotes come from backend systems. Deterministic checkpoints enable measurement and optimisation. The conversation is flexible; the outputs are controlled.
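The semi-structured pattern can be sketched in a few lines. This is a minimal illustration, not a real rating engine: the field names, surcharge, and base price are assumptions. What matters is the shape: the AI may phrase questions and parse answers freely, but the quote only ever comes from a deterministic backend function, behind a validation checkpoint.

```python
# Sketch of the semi-structured pattern: flexible conversation,
# controlled outputs. Field names and the pricing stub are
# illustrative assumptions, not a real rating engine.

from dataclasses import dataclass
from typing import Optional

REQUIRED_FIELDS = ("location", "property_type", "construction_year")

@dataclass
class QuoteRequest:
    location: str
    property_type: str
    construction_year: int

def validate(collected: dict) -> Optional[QuoteRequest]:
    """Deterministic checkpoint: refuse to quote until every required
    field has been collected and typed correctly."""
    if any(f not in collected for f in REQUIRED_FIELDS):
        return None
    return QuoteRequest(
        location=str(collected["location"]),
        property_type=str(collected["property_type"]),
        construction_year=int(collected["construction_year"]),
    )

def backend_price(req: QuoteRequest) -> float:
    """Stand-in for the carrier's rating engine. The AI never computes
    this; it only relays the result."""
    base = 1200.0
    if req.construction_year < 1950:
        base *= 1.3  # older-construction surcharge (illustrative)
    return round(base, 2)

def quote_or_next_question(collected: dict) -> dict:
    """The only two outcomes: ask for what is missing, or return a
    backend-priced quote. The AI cannot invent a third path."""
    req = validate(collected)
    if req is None:
        missing = [f for f in REQUIRED_FIELDS if f not in collected]
        return {"status": "need_more", "missing": missing}
    return {"status": "quoted", "annual_premium": backend_price(req)}
```

The checkpoint also gives you the measurement hook: every conversation passes through `validate`, so completion and drop-off can be tracked even though no two conversations are identical.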

This is how WaniWani builds every AI app it deploys, on client sites and across LLM platforms. The same constraints, the same backend integration, the same compliance guardrails. Flexible conversation, reliable outputs.

The Bigger Picture

McKinsey’s analysis of U.S. P&C carriers found that one digital-first insurer converted prospects at six times the rate of a better-known rival, after adjusting for demographics (circa 2017). The advantage came from personalisation, targeting, and simplified experiences.

AI distribution amplifies all three. Personalisation is native to conversation. Targeting happens at the moment of intent. And the experience is as simple as asking a question.

The form worked when it was the only option. It was never a good experience; it was just the interface we had. Now there is a better one. The companies that adopt it, on their own sites for discovery-phase visitors and inside AI platforms for high-intent consumers, will convert more customers at lower cost. The ones that keep optimising their forms will wonder where the traffic went.

FAQ

Do conversational quoting flows convert better than traditional forms?

Yes. A peer-reviewed study (Soni et al., 2022) found conversational interfaces achieved an NPS of 24 versus 13 for forms (p<0.001), with 70% of participants preferring the chatbot. Completion rate data from Typeform (47.3%) and Zuko (21% for traditional forms) suggests roughly double the completion rate. Insurance has the highest form abandonment of any industry at 84%.

What is the difference between an AI conversation on my site and quoting inside an LLM?

An on-site AI conversation helps visitors still in discovery: answering questions, handling objections, guiding them toward a quote before they are ready for a form. Quoting inside an LLM like ChatGPT captures consumers who never visit your site, meeting them at the moment they express intent. They cover different stages of the same buyer journey.

Does AI make people more honest on insurance applications?

For sensitive topics, the evidence suggests yes. A systematic review of 26 studies (Papneja & Yadav, 2025) found greater self-disclosure to conversational agents in the majority of cases, driven by reduced social desirability bias. This matters for insurance, where applicants routinely underreport risk factors like health conditions and prior claims.

What are the risks of AI-powered quoting?

Three main risks: hallucination (generating incorrect terms or pricing), legal liability (companies are responsible for what their AI says, per the Air Canada ruling), and the trust penalty (disclosing chatbot identity has been shown to reduce purchase rates). Semi-structured AI, where quotes come from backend pricing engines and the AI is constrained from making promises, addresses all three.

What is a sales MCP?

A sales MCP (Model Context Protocol) is a service that connects a company’s product directly to AI platforms like ChatGPT, Claude, and Gemini. It enables the AI to answer product questions, generate personalised quotes from the company’s backend systems, and capture leads, all inside the conversation. It is the infrastructure that makes products quotable and purchasable in AI assistants. WaniWani builds and deploys sales MCPs for insurance, financial services, and other quote-based industries.
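To make this concrete, here is a hedged sketch of the kind of tool a sales MCP might expose to an AI platform. The tool name, schema, and stubbed pricing call are illustrative assumptions; a real server would use an MCP SDK and call the carrier's actual backend.

```python
# Sketch of a quote tool as an AI platform might see it over MCP.
# Tool name, schema, and the pricing stub are illustrative assumptions.

QUOTE_TOOL = {
    "name": "get_home_insurance_quote",
    "description": "Return a personalised home insurance quote.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "property_type": {"type": "string"},
            "construction_year": {"type": "integer"},
        },
        "required": ["location", "property_type", "construction_year"],
    },
}

def handle_tool_call(name: str, args: dict) -> dict:
    """Dispatch a tool call from the AI platform to backend systems.
    The quote comes from the (stubbed) pricing call, never the model."""
    if name != QUOTE_TOOL["name"]:
        return {"error": f"unknown tool: {name}"}
    missing = [f for f in QUOTE_TOOL["inputSchema"]["required"] if f not in args]
    if missing:
        return {"error": "missing fields", "missing": missing}
    premium = 1200.0  # stand-in for a call to the carrier's rating engine
    return {"annual_premium": premium, "currency": "USD"}
```

The AI assistant collects the required fields conversationally, then calls the tool; the schema tells it exactly what to collect, and the handler keeps pricing on the backend.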

How should companies design AI quoting flows?

Semi-structured AI is the architecture the evidence supports. Define required data fields. Let AI collect them through natural conversation. Constrain the AI from making coverage promises or generating prices independently. Maintain deterministic checkpoints for measurement. This architecture works on your own website for discovery-phase visitors and inside LLM platforms for high-intent consumers.


Sources

  1. ProPair. “The Hidden Cost of Slow Response.” 2025.
  2. Express Legal Funding. “33% of U.S. adults have used ChatGPT for financial advice.” 2025.
  3. Formstack. “Form Conversion Report.” 2025.
  4. Soni, H. et al. “Virtual conversational agents versus online forms.” Frontiers in Digital Health, 2022; 4:954069.
  5. Typeform. “Data On Data Report.” January 2024.
  6. Zuko. “Industry Benchmarking.” 93 million tracked sessions.
  7. Lemonade Inc. 10-K Annual Report, filed February 2026.
  8. Lemonade Inc. Q4 2025 Shareholder Letter.
  9. Insurance Journal. “Lemonade Q4 Results.” February 2026.
  10. Clearsurance. “Most Innovative Insurer of 2019.”
  11. Barari, S. et al. “Generative AI Can Enhance Survey Interviews.” NORC, November 2024.
  12. Papneja, H. & Yadav, N. “Self-disclosure to conversational AI.” Personal and Ubiquitous Computing, 2025; 29:119-151.
  13. Moffatt v. Air Canada, 2024 BCCRT 149.
  14. Luo, X. et al. “Machines vs. Humans.” Marketing Science, 2019.
  15. McKinsey Digital. “How smart insurers convert digital customers at six times the rate of their peers.” Circa 2017.