# Insurance Clarity AI

Understand your Canadian home or auto insurance policy in plain English.

- **Live product** → cover-clarity-ai.lovable.app
- **PM Spec (Artifact #1)** → available on request
- **Build-in-public post** → Medium: "I built and shipped an AI product in one day"


## What it does

Insurance Clarity AI lets any Canadian upload their personal home or auto insurance policy PDF and ask plain-language questions about their coverage. Every answer cites the exact clause and page number it draws from.

It also automatically surfaces the top exclusions in your specific policy — the things most commonly misunderstood — without you having to ask.


## The problem it solves

Insurance is one of the most consequential financial products most Canadians hold. It is also one of the least understood.

- 52% of Canadians find their home or auto policy difficult to understand (belairdirect / Intact Financial national survey)
- 23% have never read their policy at all, yet their top insurance worry is what their policy covers (same survey)
- 4 in 10 Canadians with home insurance believe their policy automatically protects all valuables; only 36% know whether sewer backup is covered (same survey)
- A 2024 NAIC survey found only 27% of Gen Z adults can correctly define "deductible"
- 41% of Canadians visiting emergency departments said their last visit was for a condition that could have been treated in primary care, if access had been available (CIHI)

People find out what their policy doesn't cover at the worst possible moment — after the flood, after the collision, after the denial.

No consumer-facing tool currently lets a Canadian upload their specific policy and ask plain-language questions about it. Not a FAQ. Not a chatbot trained on generic insurance knowledge. Something that reads their document.


## Why AI, not a rule-based system

This is the question I made myself answer before writing a line of code.

| Approach | Assessment |
|---|---|
| Static FAQ | Cannot reason over a document it hasn't seen. Every policy has different wording, exclusions, and limits. |
| Rule-based chatbot | Cannot handle open-ended natural language. Cannot parse variable PDF formatting. |
| Human broker | Inaccessible at the moment of need: evenings, weekends, renewal time. |
| LLM with document context | Reads any uploaded document, handles open-ended questions, cites specific sources, scales to any insurer. |

Policy language is variable, dense, and context-dependent. Only a language model can reason across a document it has never seen before.
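In practice, the LLM-with-document-context approach reduces to assembling a grounded prompt from the extracted policy text. A minimal sketch; the wording, citation format, and function name are illustrative assumptions, not the production prompt:

```python
# Assemble a document-grounded prompt for the policy Q&A flow.
# Assumes the PDF text has already been extracted upstream.

def build_policy_prompt(policy_text: str, question: str) -> str:
    """Ground the model in the uploaded policy and require citations."""
    return (
        "You are explaining a Canadian home or auto insurance policy in plain English.\n"
        "Answer ONLY from the policy text below. Every claim must cite its source\n"
        "as [Page X, Section Y]. If the policy does not answer the question, say so.\n"
        "This is not legal or insurance advice.\n\n"
        f"--- POLICY TEXT ---\n{policy_text}\n--- END POLICY ---\n\n"
        f"Question: {question}"
    )

prompt = build_policy_prompt(
    "Page 4, Section 12: Sewer backup is excluded unless endorsed ...",
    "Am I covered for sewer backup?",
)
```

The citation requirement lives in the prompt itself, which is exactly the v1 tradeoff discussed below: instruction-level enforcement, not output validation.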


## Key product decisions and tradeoffs

**Decision 1: Citation-first architecture**

Every answer must include a source citation (`[Page X, Section Y]`) or it is not shown. This was the hardest product decision: language models are fluent without being accurate. Enforcing citation at the prompt level is a starting point; a response validation layer (parsing the output before rendering) is on the v2 roadmap.

**Decision 2: Persistent "not legal advice" framing**

The disclaimer is visible before upload, not just after an answer is given. This was a deliberate trust design choice: the framing needed to be part of the product's identity, not a footnote. I learned this by seeing the first build without it and feeling the absence.

**Decision 3: Session-only data processing**

Policy PDFs are processed in-session only, with no retention of document content after the session ends. This reduces PIPEDA exposure and is foundational to user trust in a regulated domain.

**Decision 4: Scope gate on upload**

The product detects commercial, business, life, and group insurance policies on upload and redirects with a clear message rather than attempting to answer. Scope discipline is a product quality decision, not just a technical one.

**Tradeoff acknowledged:** v1 citation enforcement is prompt-level only. The model is instructed to cite sources, but the output is not programmatically validated before rendering, which means a confident but uncited response could theoretically surface. v2 will add a validation layer. This is documented here because I think it matters to say it publicly.
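The planned v2 validation layer could be as small as a regex gate run before rendering. A sketch, assuming the `[Page X, Section Y]` citation format described above; this is not the shipped code:

```python
import re

# Reject any model response that lacks at least one citation in the
# [Page X, Section Y] format the prompt asks for.
CITATION_RE = re.compile(r"\[Page\s+\d+,\s+Section\s+[^\]]+\]")

def validate_response(answer: str) -> bool:
    """True only if the answer contains at least one page/section citation."""
    return bool(CITATION_RE.search(answer))
```

Uncited responses would then be replaced with an explicit "I couldn't ground this in your policy" message rather than shown as-is.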


## Failure modes designed for

Before building, I wrote six failure modes and a product response for each. This is the section most AI products skip.

| Failure mode | Design response |
|---|---|
| Hallucination: AI states coverage that doesn't exist | Every answer must cite page and clause. If an answer cannot be grounded, the product says so explicitly. |
| Overconfidence: user treats output as legal advice | "Not legal or insurance advice" framing persistent on every screen. |
| Unreadable or scanned PDF | Graceful fallback with a friendly error message; a text-paste alternative is offered. |
| Scope mismatch: commercial/life/group policy uploaded | Detected on upload. Clear redirect message before the user invests time. |
| Misinterpretation | Every answer ends with "here's what to ask your broker." A human stays in the loop. |
| Adversarial input | Rate limiting and input validation from day one. |
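The scanned-PDF fallback can be triggered by a simple heuristic: if text extraction yields almost nothing per page, the PDF probably has no text layer. A sketch, where the 100-character threshold and the fallback copy are assumptions:

```python
# Heuristic check for scanned (image-only) PDFs after text extraction.
# `pages` is the per-page extracted text from whatever PDF library is used.

def is_likely_scanned(pages: list[str], min_chars_per_page: int = 100) -> bool:
    """True if the extracted text is too sparse to be a real text layer."""
    if not pages:
        return True
    avg_chars = sum(len(p.strip()) for p in pages) / len(pages)
    return avg_chars < min_chars_per_page

FALLBACK_MESSAGE = (
    "We couldn't read text from this PDF - it may be a scanned copy. "
    "You can paste the policy text directly instead."
)
```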

Tested in v1: scanned PDF fallback ✓, question-without-upload prompt ✓
Not yet tested: commercial policy detection at scale, very large documents (80+ pages)
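Handling the untested very-large-document case would likely mean chunking pages and retrieving only the relevant chunks per question, since an 80+ page policy may not fit comfortably in one prompt. A minimal page-window chunker as a sketch; the chunk size and overlap are assumptions:

```python
# Group consecutive pages into overlapping chunks for retrieval.
# Overlap keeps clauses that span a page boundary inside at least one chunk.

def chunk_pages(pages: list[str], pages_per_chunk: int = 5, overlap: int = 1) -> list[str]:
    """Split per-page text into overlapping multi-page chunks."""
    chunks = []
    step = pages_per_chunk - overlap
    for start in range(0, len(pages), step):
        chunks.append("\n".join(pages[start:start + pages_per_chunk]))
        if start + pages_per_chunk >= len(pages):
            break
    return chunks
```

A retrieval step (keyword or embedding based) would then pick the top chunks to place in the prompt alongside the user's question.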


## What I'd build differently in v2

1. **Response validation layer**: parse output programmatically before rendering; reject responses without a citation pattern
2. **Bilingual support**: Ontario and Quebec users may have English or French policies
3. **Multi-policy comparison**: let users upload two policies and ask "which covers X better?"
4. **Citation deep-link**: click a citation and jump to that page in the uploaded PDF

## Tech stack

| Layer | Tool | Why |
|---|---|---|
| UI + deployment | Lovable | Fastest path from PM spec to working product for a non-engineer PM |
| LLM | Claude API (Anthropic) | Document reasoning, citation grounding, open-ended Q&A |
| Prompt prototyping | Google AI Studio | Free-tier iteration before production wiring |
| Version control | GitHub | This repo |
| Hosting | Lovable / Vercel | Public URL for portfolio and user testing |

## Regulatory context

This product operates in the Canadian personal insurance space. Relevant frameworks considered in design:

- **PIPEDA / Bill C-27**: no policy data retained after the session
- **FSRA consumer protection principles**: transparency, accessibility, fairness
- **FCAC financial literacy mandate**: product language targets a Grade 8 reading level
- **"Not legal advice" framing**: persistent, not footnoted

This is a consumer-facing tool, not an insurer-facing one. That distinction is a trust design decision.
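The Grade 8 reading-level target above can be spot-checked with the Flesch-Kincaid grade formula. A rough heuristic sketch, not part of the product; the syllable counter is a vowel-group approximation, so results are approximate:

```python
import re

# Flesch-Kincaid grade level:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (crude but workable)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Estimate the U.S. school grade level needed to read the text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
```

Running product copy through a check like this during review keeps the Grade 8 target measurable instead of aspirational.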


## Portfolio context

This is Portfolio Project 1 in a 90-day public AI product portfolio build.

The goal is to prove that a product person with domain depth, a habit of writing failure modes before building, and the discipline to document tradeoffs publicly makes better AI products than someone who just adds "AI" to a feature list.

Project 2 — Healthcare navigation tool (care-nav AI for Canadians without a family doctor). Claude Code. Coming Week 3.


## Public references

All problem evidence sourced from public research only. No proprietary information.

- belairdirect / Intact Financial — Canadian insurance literacy national survey
- NAIC Consumer Survey, 2024 — insurance literacy statistics
- CBC Radio / Cost of Living, 2026 — Canadian insurance behaviour data
- Financial Consumer Agency of Canada (FCAC) — consumer protection mandate
- Financial Services Regulatory Authority of Ontario (FSRA) — insurance regulation
- Gill v The Wawanesa Mutual Insurance Company, 2023 BCCA 97 — policy interpretation complexity
- Canadian Institute for Health Information (CIHI) — healthcare access data
- Canadian Medical Association (CMA) — primary care access statistics

Built by Mahesh — AI Product Leader, Canadian FinServ
Follow the build: LinkedIn | Medium
