How to Build a Customer Support Agent Using LangGraph in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21
customer-support · langgraph · typescript · insurance

A customer support agent for insurance needs to do more than answer FAQs. It has to classify intent, retrieve policy-specific context, handle claims or billing questions safely, and escalate when the request crosses into regulated advice or sensitive account actions.

For insurance teams, that matters because the wrong answer is not just a bad UX problem. It can create compliance issues, violate audit requirements, or expose customer data outside the right residency boundary.

Architecture

  • Input layer

    • Receives chat messages from web, mobile, or contact-center channels
    • Normalizes user identity, policy number, locale, and consent flags
  • State model

    • Stores conversation history, extracted intent, policy context, and escalation status
    • Keeps the workflow deterministic across nodes
  • Router node

    • Classifies the request into claims, billing, policy info, document upload help, or human escalation
    • Prevents every message from going straight to the LLM
  • Retrieval layer

    • Pulls from approved sources only: policy docs, coverage summaries, claims process docs, and internal SOPs
    • Filters by jurisdiction and product line
  • Response generation node

    • Drafts the customer-facing response using retrieved context
    • Applies tone constraints and insurance disclaimers where needed
  • Escalation node

    • Hands off to a human agent for complaints, denial disputes, coverage interpretation, fraud signals, or PII-sensitive actions

Implementation

1) Define state and build the graph skeleton

Use Annotation.Root to define a typed state shape. In insurance workflows, you want explicit fields for routing decisions and auditability instead of burying everything in free-form messages.

import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, AIMessage, BaseMessage } from "@langchain/core/messages";

const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left = [], right = []) => left.concat(right),
    default: () => [],
  }),
  intent: Annotation<string>({
    reducer: (_, next) => next,
    default: () => "unknown",
  }),
  retrievedContext: Annotation<string>({
    reducer: (_, next) => next,
    default: () => "",
  }),
  needsEscalation: Annotation<boolean>({
    reducer: (_, next) => next,
    default: () => false,
  }),
});

type AgentStateType = typeof AgentState.State;

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function routeNode(state: AgentStateType) {
  const last = state.messages[state.messages.length - 1];
  const text = typeof last?.content === "string" ? last.content : "";

  const lower = text.toLowerCase();
  let intent = "general";

  if (lower.includes("claim")) intent = "claims";
  else if (lower.includes("bill") || lower.includes("premium")) intent = "billing";
  else if (lower.includes("cancel") || lower.includes("coverage")) intent = "policy";

  return {
    intent,
    needsEscalation:
      lower.includes("lawsuit") ||
      lower.includes("complaint") ||
      lower.includes("deny") ||
      lower.includes("fraud"),
    messages: [new AIMessage(`Routed as ${intent}`)],
  };
}
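The substring checks above are easy to fool: `includes("claim")` also matches "reclaim" and "disclaimer", and `includes("bill")` matches "billboard". One way to tighten keyword routing before moving to an LLM classifier is word-boundary matching. This is a sketch; the rule table and function name are illustrative, not part of the LangGraph API:

```typescript
// Hypothetical keyword rules with word boundaries to avoid false positives.
const intentRules: Array<{ intent: string; pattern: RegExp }> = [
  { intent: "claims", pattern: /\bclaims?\b/i },
  { intent: "billing", pattern: /\b(bill(ing|s)?|premiums?)\b/i },
  { intent: "policy", pattern: /\b(cancel(lation)?|coverage)\b/i },
];

function classifyIntent(text: string): string {
  // First matching rule wins; fall back to "general".
  for (const rule of intentRules) {
    if (rule.pattern.test(text)) return rule.intent;
  }
  return "general";
}
```

With this variant, "question about my billboard" routes to "general" instead of "billing", while "When is my premium due?" still routes correctly.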

2) Add retrieval against approved insurance content

In production you would replace this stub with a vector store or document service restricted to approved artifacts. The important part is that the graph only retrieves from sources you can defend in an audit.

async function retrieveNode(state: AgentStateType) {
  if (state.needsEscalation) {
    return { retrievedContext: "" };
  }

  const contextByIntent: Record<string, string> = {
    claims:
      "Claims can be filed online within the member portal. Required documents include incident details, invoices, and any police report where applicable.",
    billing:
      "Premium payments can be made by card or ACH. Failed payments may trigger a grace period depending on jurisdiction.",
    policy:
      "Coverage changes are effective only after underwriting review and confirmation in the policy schedule.",
    general:
      "Use approved support articles only. Do not provide legal advice or interpret ambiguous coverage terms.",
  };

  return {
    retrievedContext: contextByIntent[state.intent] ?? contextByIntent.general,
  };
}
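The architecture section calls for filtering by jurisdiction and product line. Against a real document service that filter usually lives in the index query; as a plain-TypeScript sketch (the `ApprovedDoc` metadata shape is an assumption, not a LangChain type), it looks like this:

```typescript
// Hypothetical metadata shape for approved content; a real system would
// attach these fields in the document store or vector index.
interface ApprovedDoc {
  id: string;
  jurisdiction: string; // e.g. "US-CA"
  productLine: string;  // e.g. "auto", "home"
  text: string;
}

function filterByScope(
  docs: ApprovedDoc[],
  jurisdiction: string,
  productLine: string
): ApprovedDoc[] {
  // Only surface content approved for this customer's jurisdiction and product.
  return docs.filter(
    (d) => d.jurisdiction === jurisdiction && d.productLine === productLine
  );
}
```

Applying this before retrieval keeps out-of-jurisdiction coverage language from ever reaching the prompt.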

3) Generate a constrained response and escalate when needed

This is where LangGraph gives you control over flow. The generator should use retrieved context only; if escalation is required, short-circuit to a handoff message.

async function respondNode(state: AgentStateType) {
  if (state.needsEscalation) {
    return {
      messages: [
        new AIMessage(
          "I’m transferring this to a licensed support specialist because this request needs human review."
        ),
      ],
    };
  }

  const prompt = [
    {
      role: "system" as const,
      content:
        "You are an insurance support assistant. Use only the provided context. Do not invent coverage details. If unsure, escalate.",
    },
    ...state.messages.map((m) => ({
      role: m._getType() === "human" ? ("user" as const) : ("assistant" as const),
      content: String(m.content),
    })),
    {
      role: "system" as const,
      content: `Approved context:\n${state.retrievedContext}`,
    },
  ];

  const result = await model.invoke(prompt);

  return {
    messages: [result],
  };
}

const graph = new StateGraph(AgentState)
  .addNode("route", routeNode)
  .addNode("retrieve", retrieveNode)
  .addNode("respond", respondNode)
  .addEdge(START, "route")
  .addEdge("route", "retrieve")
  .addEdge("retrieve", "respond")
  .addEdge("respond", END);

export const app = graph.compile();
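As wired above, escalations still flow through `retrieve` and `respond`, and each node re-checks `needsEscalation`. An alternative is to branch once at the router with `addConditionalEdges` so escalations skip retrieval entirely. The branch decision itself can stay a pure function, which makes it testable without compiling a graph; the `escalateNode` handoff node in the commented wiring is hypothetical:

```typescript
// Pure branch decision: unit-testable in isolation.
function routeAfterClassification(state: {
  needsEscalation: boolean;
}): "escalate" | "retrieve" {
  return state.needsEscalation ? "escalate" : "retrieve";
}

// Sketch of the alternative wiring (assumes an `escalateNode` exists):
//
//   const graph = new StateGraph(AgentState)
//     .addNode("route", routeNode)
//     .addNode("retrieve", retrieveNode)
//     .addNode("respond", respondNode)
//     .addNode("escalate", escalateNode)
//     .addEdge(START, "route")
//     .addConditionalEdges("route", routeAfterClassification)
//     .addEdge("retrieve", "respond")
//     .addEdge("respond", END)
//     .addEdge("escalate", END);
```

Branching early also keeps retrieval costs and any retrieval-side logging out of the escalation path.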

4) Invoke it with traceable inputs

Pass metadata like policy ID only if your storage and logs are compliant with your data handling rules. For insurance workloads, keep PII out of prompts unless it is required for the task.

async function main() {
  const result = await app.invoke({
    messages: [new HumanMessage("Can you explain my claim status?")],
    intent: "unknown",
    retrievedContext: "",
    needsEscalation: false,
  });

  console.log(result.messages.at(-1)?.content);
}

main().catch(console.error);
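One way to make each invocation traceable without logging the transcript itself is to derive a compact audit record from the final graph state. A sketch; the record shape and field names are assumptions, not a LangGraph convention:

```typescript
interface AuditRecord {
  traceId: string;
  intent: string;
  escalated: boolean;
  timestamp: string;
}

// Derive a small, transcript-free audit record from the final state.
function buildAuditRecord(
  traceId: string,
  state: { intent: string; needsEscalation: boolean }
): AuditRecord {
  return {
    traceId,
    intent: state.intent,
    escalated: state.needsEscalation,
    timestamp: new Date().toISOString(),
  };
}
```

Logging this record instead of raw state gives compliance teams the routing decision and outcome without persisting customer messages.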

Production Considerations

  • Audit logging

    • Log routed intent, retrieval source IDs, escalation reason, and final action taken
    • Store enough detail for compliance review without dumping raw PII into logs
  • Data residency

    • Keep embeddings, chat transcripts, and retrieval indexes inside the required region
    • Make sure your model endpoint and telemetry pipeline follow the same residency rules
  • Guardrails

    • Block unsupported topics like legal advice on coverage disputes or definitive claim denials
    • Force escalation when confidence is low or when users mention fraud, litigation, or complaints
  • Operational monitoring

    • Track escalation rate by intent category
    • Watch for hallucination patterns in responses about deductibles, exclusions, waiting periods, and cancellation terms
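The monitoring point above can be sketched as a per-intent counter. This in-memory version is illustrative only; production systems would emit these counts to a metrics backend:

```typescript
// Track escalation rate per intent category.
class EscalationTracker {
  private total = new Map<string, number>();
  private escalated = new Map<string, number>();

  record(intent: string, wasEscalated: boolean): void {
    this.total.set(intent, (this.total.get(intent) ?? 0) + 1);
    if (wasEscalated) {
      this.escalated.set(intent, (this.escalated.get(intent) ?? 0) + 1);
    }
  }

  rate(intent: string): number {
    const t = this.total.get(intent) ?? 0;
    return t === 0 ? 0 : (this.escalated.get(intent) ?? 0) / t;
  }
}
```

A sudden jump in the rate for one intent (say, billing) is often the first sign that routing rules or retrieval content have drifted.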

Common Pitfalls

  1. Letting the LLM answer from memory

    • Insurance support must be grounded in approved documents.
    • Avoid this by routing through retrieval first and constraining generation to returned context.
  2. Using one generic path for every request

    • Claims status questions and billing disputes need different handling.
    • Avoid this by splitting state into explicit intents and branching early with StateGraph.
  3. Logging sensitive customer data everywhere

    • Raw transcripts often contain policy numbers, addresses, dates of birth, and claim details.
    • Avoid this by redacting at ingestion time and storing only what compliance requires.
  4. Skipping escalation rules

    • Some requests should never be fully automated.
    • Avoid this by hard-coding escalation triggers for complaints, denials, fraud signals, and ambiguous coverage interpretation.
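The redaction-at-ingestion advice in pitfall 3 can be sketched as a rule table applied before anything is stored. The patterns below are placeholders; real policy-number and date formats vary by insurer and locale:

```typescript
// Hypothetical redaction rules; tune patterns to your actual data formats.
const REDACTION_RULES: Array<{ label: string; pattern: RegExp }> = [
  { label: "[POLICY_NUMBER]", pattern: /\bPOL-\d{6,10}\b/g }, // assumed format
  { label: "[DOB]", pattern: /\b\d{4}-\d{2}-\d{2}\b/g },
  { label: "[EMAIL]", pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
];

function redactForLogs(text: string): string {
  // Apply each rule in order, replacing matches with a stable label.
  return REDACTION_RULES.reduce(
    (acc, rule) => acc.replace(rule.pattern, rule.label),
    text
  );
}
```

Running this at ingestion, before transcripts hit storage or telemetry, is what keeps the audit trail useful without turning it into a PII liability.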

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
