How to Build a Policy Q&A Agent Using LangGraph in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21
Tags: policy-qa, langgraph, typescript, insurance

A policy Q&A agent answers customer or agent questions against insurance policy documents, endorsements, and product guides, then returns grounded responses with citations. For insurance, that matters because the difference between a correct answer and an ungrounded one is usually compliance exposure, bad customer outcomes, and a messy audit trail.

Architecture

  • Chat input node

    • Accepts the user question plus conversation context.
    • Normalizes the request into a state object LangGraph can pass through the graph.
  • Policy retrieval layer

    • Queries a vector store or document index for policy clauses, exclusions, limits, and endorsements.
    • Returns short evidence chunks with source metadata.
  • Answer generation node

    • Uses an LLM to draft an answer strictly from retrieved policy text.
    • Produces citations so the response can be audited later.
  • Guardrail / compliance node

    • Checks for unsupported claims, coverage advice beyond source text, and sensitive data leakage.
    • Routes low-confidence cases to escalation instead of guessing.
  • Human handoff node

    • Escalates ambiguous or high-risk questions to a licensed adjuster or service rep.
    • Preserves conversation history and retrieved evidence for review.
  • Audit logging

    • Stores question, retrieved sources, model output, confidence signals, and final decision.
    • Needed for regulated environments and internal QA; a minimal record shape is sketched just after this list.
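
Audit logging is the one component above that does not get its own node in the implementation below, so here is a minimal sketch of what a record might capture. The field names and the saveAuditRecord function are assumptions; adapt them to whatever store your compliance team already uses.

// Illustrative audit record shape; not a required schema.
type AuditRecord = {
  question: string;
  retrievedSources: { id: string; source: string; text: string }[];
  modelOutput: string;
  route: "answer" | "escalate";
  confidenceSignals?: Record<string, number>;
  timestamp: string; // ISO 8601
  operatorId?: string; // set when a human handles the escalation
};

// Hypothetical persistence call: swap in your database or logging pipeline.
async function saveAuditRecord(record: AuditRecord): Promise<void> {
  console.log(JSON.stringify(record)); // placeholder for a durable write
}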

Implementation

1) Define the graph state and dependencies

Use Annotation.Root to define typed state. Keep the state small: question, retrieved docs, draft answer, and a route flag for escalation.

import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { AIMessage } from "@langchain/core/messages";

type RetrievedDoc = {
  id: string;
  text: string;
  source: string;
};

const GraphState = Annotation.Root({
  question: Annotation<string>(),
  docs: Annotation<RetrievedDoc[]>({
    default: () => [],
    reducer: (_, next) => next,
  }),
  answer: Annotation<string>({
    default: () => "",
    reducer: (_, next) => next,
  }),
  route: Annotation<"answer" | "escalate">({
    default: () => "answer",
    reducer: (_, next) => next,
  }),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

2) Add retrieval and answer nodes

In production you would swap the mock retrieval for Pinecone, pgvector, Elasticsearch, or your document service; a vector-store-backed variant is sketched after the mock below. The important part is that retrieval returns source text plus metadata.

async function retrievePolicies(state: typeof GraphState.State) {
  const q = state.question.toLowerCase();

  const docs: RetrievedDoc[] = [
    {
      id: "policy-001",
      source: "Auto Policy v3 §4.2",
      text:
        q.includes("deductible")
          ? "Collision deductible is $500 unless amended by endorsement."
          : "Coverage applies subject to declarations page limits.",
    },
    {
      id: "policy-002",
      source: "Home Policy v2 §7.1",
      text:
        q.includes("flood")
          ? "Flood damage is excluded unless separate flood coverage is purchased."
          : "Personal property limits are shown on the declarations page.",
    },
  ];

  return { docs };
}
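
As a rough sketch of what a production retrieval node might look like, here is a variant backed by LangChain's in-memory vector store and OpenAI embeddings. The store choice, the embedding model, and the sample document are assumptions, and import paths vary across LangChain versions; swap in your own index and document loader.

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

// Build the index once at startup from your policy corpus.
const vectorStore = await MemoryVectorStore.fromDocuments(
  [
    new Document({
      pageContent:
        "Flood damage is excluded unless separate flood coverage is purchased.",
      metadata: { id: "policy-002", source: "Home Policy v2 §7.1" },
    }),
  ],
  new OpenAIEmbeddings()
);

async function retrievePoliciesFromStore(state: typeof GraphState.State) {
  // Top-k similarity search; tune k and add metadata filters for your corpus.
  const hits = await vectorStore.similaritySearch(state.question, 4);

  const docs: RetrievedDoc[] = hits.map((hit, i) => ({
    id: String(hit.metadata.id ?? `hit-${i}`),
    source: String(hit.metadata.source ?? "unknown"),
    text: hit.pageContent,
  }));

  return { docs };
}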

async function draftAnswer(state: typeof GraphState.State) {
  const context = state.docs
    .map((d) => `[${d.source}] ${d.text}`)
    .join("\n");

  const prompt = [
    {
      role: "system" as const,
      content:
        "You answer insurance policy questions using only provided policy excerpts. If the excerpts do not support a direct answer, say you need escalation. Always cite sources in brackets.",
    },
    {
      role: "user" as const,
      content: `Question: ${state.question}\n\nPolicy excerpts:\n${context}`,
    },
  ];

  const result = await llm.invoke(prompt);
  return { answer: (result as AIMessage).content.toString() };
}
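
If you want citations as structured data rather than bracketed text, one option is to ask the model for a typed response. This sketch uses withStructuredOutput (available on recent @langchain/openai releases) with a zod schema; the schema fields are an assumption, and you would persist result.citations alongside your audit record rather than in the minimal graph state defined above.

import { z } from "zod";

// Illustrative schema: a grounded answer plus the sources it relies on.
const CitedAnswer = z.object({
  answer: z.string().describe("Answer drawn only from the provided excerpts"),
  citations: z.array(z.string()).describe("Source labels copied from the excerpts"),
});

async function draftStructuredAnswer(state: typeof GraphState.State) {
  const context = state.docs.map((d) => `[${d.source}] ${d.text}`).join("\n");

  const structuredLlm = llm.withStructuredOutput(CitedAnswer);
  const result = await structuredLlm.invoke(
    `Question: ${state.question}\n\nPolicy excerpts:\n${context}`
  );

  // Keep the flat answer in graph state; store result.citations separately.
  return { answer: result.answer };
}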

3) Add a compliance check and route to escalation

This is where insurance-specific control lives. If the model starts making unsupported coverage determinations or there are no relevant docs, route to a human.

async function complianceGate(state: typeof GraphState.State) {
  const hasDocs = state.docs.length > 0;
  const mentionsUnsupportedCoverage =
    /guaranteed|covered for sure|always covered/i.test(state.answer);

  if (!hasDocs || mentionsUnsupportedCoverage) {
    return { route: "escalate" as const };
  }

  return { route: "answer" as const };
}

async function escalate() {
  return {
    answer:
      "I need to escalate this policy question to a licensed representative for review.",
    route: "escalate" as const,
  };
}
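
One way to tighten the gate is to verify that every bracketed citation in the draft actually matches a retrieved source; anything the model cites that was never retrieved is grounds for escalation. The bracket format follows the system prompt above, but the regex is an assumption about how your model formats citations, so treat this as a sketch you would call from complianceGate.

function hasUnsupportedCitations(state: typeof GraphState.State): boolean {
  const knownSources = new Set(state.docs.map((d) => d.source));

  // Pull every "[...]" span out of the draft answer.
  const cited = [...state.answer.matchAll(/\[([^\]]+)\]/g)].map((m) => m[1]);

  // No citations at all, or a citation to a source we never retrieved, is a red flag.
  return cited.length === 0 || cited.some((c) => !knownSources.has(c));
}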

4) Wire the graph and invoke it

StateGraph gives you explicit control over routing. That matters when you need an auditable path through retrieval, generation, and escalation.

const graph = new StateGraph(GraphState)
  .addNode("retrievePolicies", retrievePolicies)
  .addNode("draftAnswer", draftAnswer)
  .addNode("complianceGate", complianceGate)
   .addNode("escalate", escalate)
   .addEdge(START, "retrievePolicies")
   .addEdge("retrievePolicies", "draftAnswer")
   .addEdge("draftAnswer", "complianceGate")
   .addConditionalEdges("complianceGate", (state) => state.route, {
     answer: END,
     escalate: "escalate",
   })
   .addEdge("escalate", END);

const app = graph.compile();

const result = await app.invoke({
   question: "Is flood damage covered under my home policy?",
});

console.log(result.answer);
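
The human handoff node in the architecture assumes conversation history survives between turns. One way to get that, sketched below, is to compile the graph with a checkpointer and pass a thread_id per conversation; MemorySaver is the in-memory option and is an assumption you would swap for a durable backend in production.

import { MemorySaver } from "@langchain/langgraph";

// Persist graph state per conversation thread (in memory only here;
// use a durable checkpointer in production).
const checkpointer = new MemorySaver();
const persistentApp = graph.compile({ checkpointer });

const turn = await persistentApp.invoke(
  { question: "Is flood damage covered under my home policy?" },
  { configurable: { thread_id: "conversation-123" } }
);

console.log(turn.answer);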

Production Considerations

  • Audit every response

    • Persist the user question, retrieved clauses, model output, route taken in the graph, timestamp, and operator identity if escalated.
    • Insurance teams will ask for traceability when a customer disputes an answer.
  • Enforce data residency

    • Keep policy data and logs inside approved regions.
    • If you serve EU or APAC customers, make sure your vector store and LLM endpoints meet local residency requirements.
  • Add hard guardrails

    • Block medical details, payment card data, claim notes unrelated to the question, and any unsupported coverage promises; a naive starting-point check is sketched after this list.
  • Monitor retrieval quality

    • Track how often retrieval returns nothing relevant and how often answers end in escalation.
    • Spot-check retrieved clauses against the questions that pulled them; stale or badly chunked policy documents degrade answers silently.
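
As a starting point for the hard guardrails above, here is a minimal pre-send check. The patterns are deliberately naive assumptions; a real deployment would use a proper PII/PCI detection service and a reviewed blocklist.

// Naive pre-send guardrail sketch: block obvious payment card numbers and
// unsupported coverage promises before the answer leaves the system.
function violatesGuardrails(answer: string): boolean {
  const looksLikeCardNumber = /\b(?:\d[ -]?){13,16}\b/.test(answer);
  const unsupportedPromise = /guaranteed|covered for sure|always covered/i.test(answer);
  return looksLikeCardNumber || unsupportedPromise;
}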

Common Pitfalls

  • Letting the model answer without evidence

Avoid it: require retrieved clauses before generation. If retrieval returns nothing relevant, escalate instead of asking the model to infer coverage.

  • Treating all policy questions as equal

Avoid it: classify high-risk intents like exclusions, claim denials, lapse status, cancellation rules, and premium changes separately; those often need stricter controls or human review. A keyword sketch follows this list.

  • Skipping citation storage

Avoid it: store exact source IDs and clause text used in each response. In insurance operations this is not optional; it is how you defend an answer during audit or complaint handling.
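
For the intent classification point above, a minimal version is a keyword pass that flags high-risk topics before generation and routes them to stricter handling. The categories mirror the list in that pitfall; the patterns themselves are assumptions you would tune against real traffic or replace with an LLM classifier.

// Rough intent flagging: route high-risk topics to stricter controls or review.
const HIGH_RISK_PATTERNS: Record<string, RegExp> = {
  exclusion: /exclud|not covered/i,
  claimDenial: /den(y|ied|ial)/i,
  lapse: /lapse|grace period/i,
  cancellation: /cancel/i,
  premiumChange: /premium (increase|change|hike)/i,
};

function flagHighRiskIntent(question: string): string | null {
  for (const [intent, pattern] of Object.entries(HIGH_RISK_PATTERNS)) {
    if (pattern.test(question)) return intent;
  }
  return null;
}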


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
