How to Build an Underwriting Agent Using LangChain in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21
underwriting · langchain · typescript · insurance

An underwriting agent automates the first pass of risk evaluation for insurance applications. It reads applicant data, checks policy rules, asks for missing information, and returns a structured recommendation that underwriters can review instead of starting from scratch.

Architecture

A production underwriting agent needs a few concrete pieces; a rough sketch of how they fit together follows the list:

  • Input normalizer
    • Converts application payloads from CRM, web forms, or broker submissions into a consistent schema.
  • Policy retrieval layer
    • Pulls underwriting guidelines, appetite rules, and exclusions from a controlled knowledge base.
  • LLM decision chain
    • Uses LangChain to classify risk, identify missing fields, and produce a recommendation with rationale.
  • Tool layer
    • Calls internal systems for claims history, KYC/AML checks, address validation, and fraud signals.
  • Audit logger
    • Stores prompts, retrieved policy snippets, tool outputs, and final recommendations for compliance review.
  • Human approval gate
    • Routes edge cases and high-risk submissions to an underwriter before any decision is finalized.
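
Here is one way those pieces might compose, assuming the Application and UnderwritingDecision types defined in the implementation below. The interface names are illustrative, not a prescribed API:

import type { Application, UnderwritingDecision } from "./schemas";

interface InputNormalizer {
  // CRM, web form, or broker payload -> canonical schema
  normalize(raw: unknown): Application;
}

interface PolicyRetrievalLayer {
  // Guidelines, appetite rules, and exclusions relevant to this application
  relevantRules(app: Application): Promise<string[]>;
}

interface ToolLayer {
  // Claims history, KYC/AML, address validation, fraud signals
  riskSignals(app: Application): Promise<Record<string, unknown>>;
}

interface DecisionChain {
  decide(app: Application, rules: string[]): Promise<UnderwritingDecision>;
}

interface AuditLogger {
  record(entry: {
    input: unknown;
    rules: string[];
    decision: UnderwritingDecision;
  }): Promise<void>;
}

interface ApprovalGate {
  route(decision: UnderwritingDecision): "auto_approve" | "human_review";
}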

Implementation

1) Define the underwriting schema

Keep the output structured. Insurance workflows break when the model returns prose instead of fields you can persist or review.

import { z } from "zod";

export const UnderwritingDecisionSchema = z.object({
  riskBand: z.enum(["low", "medium", "high"]),
  decision: z.enum(["approve", "refer", "decline"]),
  reasons: z.array(z.string()).min(1),
  missingInformation: z.array(z.string()),
  complianceFlags: z.array(z.string()),
});

export type UnderwritingDecision = z.infer<typeof UnderwritingDecisionSchema>;

export const ApplicationSchema = z.object({
  applicantName: z.string(),
  productType: z.string(),
  country: z.string(),
  annualRevenue: z.number().optional(),
  yearsInBusiness: z.number().optional(),
  priorClaimsCount: z.number().optional(),
});

export type Application = z.infer<typeof ApplicationSchema>;
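
A quick usage sketch with an illustrative payload. safeParse lets you reject malformed submissions at intake instead of letting them reach the model:

import { ApplicationSchema } from "./schemas";

const parsed = ApplicationSchema.safeParse({
  applicantName: "Acme Logistics Ltd", // illustrative data
  productType: "commercial-property",
  country: "DE",
  priorClaimsCount: 0,
});

if (!parsed.success) {
  // Route to manual intake instead of guessing at missing fields.
  console.error(parsed.error.flatten());
}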

2) Build the LangChain prompt and model chain

Use ChatOpenAI, ChatPromptTemplate, and StructuredOutputParser where they fit. For strict output in production, I prefer withStructuredOutput on chat models that support it.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { ApplicationSchema, UnderwritingDecisionSchema } from "./schemas";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    [
      "You are an insurance underwriting assistant.",
      "Apply the insurer's appetite rules strictly.",
      "Do not invent facts.",
      "If required data is missing, mark the case as refer.",
      "Return only valid structured output.",
    ].join(" "),
  ],
  [
    "human",
    `Application:
{application}

Policy context:
{policyContext}

Known risk signals:
{riskSignals}`,
  ],
]);

const underwritingChain = RunnableSequence.from([
  async (input: {
    application: unknown;
    policyContext?: string;
    riskSignals?: string;
  }) => {
    const application = ApplicationSchema.parse(input.application);
    return {
      application: JSON.stringify(application),
      // Fall back to static appetite rules until retrieval (step 3) supplies real context.
      policyContext:
        input.policyContext ??
        JSON.stringify({
          maxClaimsCountForAutoApprove: 1,
          excludedCountries: ["IR", "KP"],
          requireRevenueForCommercialPolicies: true,
        }),
      riskSignals:
        input.riskSignals ??
        JSON.stringify({
          claimsHistoryAvailable: true,
          kycStatus: "verified",
        }),
    };
  },
  prompt,
  model.withStructuredOutput(UnderwritingDecisionSchema),
]);
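
A quick standalone check, assuming the fallback policy context baked into the chain above (the full pipeline in step 4 passes retrieved rules instead):

const decision = await underwritingChain.invoke({
  application: {
    applicantName: "Acme Logistics Ltd", // illustrative data
    productType: "commercial-property",
    country: "DE",
    priorClaimsCount: 0,
  },
});

console.log(decision.riskBand, decision.decision, decision.reasons);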

3) Add retrieval for underwriting rules

Underwriting decisions should be grounded in your actual appetite documents. Use a vector store or retriever so the agent cites current rules instead of relying on memory.

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

async function buildRetriever() {
  const docs = [
    new Document({
      pageContent:
        "Commercial policies require annual revenue above $250k unless approved by senior underwriter.",
    }),
    new Document({
      pageContent:
        "Applications from sanctioned countries must be declined immediately.",
    }),
    new Document({
      pageContent:
        "More than two prior claims in the last three years requires manual referral.",
    }),
  ];

  const store = await MemoryVectorStore.fromDocuments(
    docs,
    new OpenAIEmbeddings()
  );

  return store.asRetriever(3);
}
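
A usage sketch follows. MemoryVectorStore is fine for a demo, but in production you would swap in a persistent store so the index can be rebuilt whenever guidelines change:

const retriever = await buildRetriever();

// The retriever is a Runnable: invoke it with a query string, get Documents back.
const rules = await retriever.invoke(
  "commercial property, Germany, 3 prior claims, revenue $180k"
);

console.log(rules.map((d) => d.pageContent));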

4) Execute the agent and persist an audit trail

The audit record matters as much as the decision. In insurance, you need traceability for regulators, disputes, and internal QA.

async function runUnderwriting(applicationInput: unknown) {
  const retriever = await buildRetriever();
  const relevantRules = await retriever.invoke(
    JSON.stringify(applicationInput)
  );

  const policyContext = relevantRules.map((d) => d.pageContent).join("\n");

  const result = await underwritingChain.invoke({
    application: applicationInput,
    policyContext,
    riskSignals: JSON.stringify({ sourceChecksPassed: true }),
  });

  await saveAuditRecord({
    input: applicationInput,
    policyContext,
    decision: result,
    timestamp: new Date().toISOString(),
  });

  return result;
}

async function saveAuditRecord(record: unknown) {
  // Demo sink; persist to an append-only store in production.
  console.log("AUDIT_RECORD", JSON.stringify(record));
}

Production Considerations

  • Keep data residency explicit
    • If you process EU policies in-region, pin your model endpoint, vector store, logs, and object storage to that region. Don’t ship PII across borders because your prompt pipeline is “just internal.” (See the endpoint sketch after this list.)
  • Log every decision path
    • Store input payloads, retrieved policy snippets, tool outputs, final output, and model version. That gives you defensible auditability when a broker challenges a decline.
  • Add guardrails before auto-decisioning
    • Auto-approve only narrow low-risk cases. Everything else should route to a human underwriter with a clear reason code like missing_financials or policy_exclusion_match. (See the routing sketch after this list.)
  • Monitor drift on appetite rules
    • When underwriting guidelines change, your retrieval index must refresh immediately. Stale rules create inconsistent decisions and compliance exposure.
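
On residency, a minimal sketch: @langchain/openai accepts OpenAI client options via the configuration field, so you can point the model at a regional endpoint. The URL below is a placeholder, not a real endpoint:

import { ChatOpenAI } from "@langchain/openai";

const euModel = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
  configuration: {
    baseURL: "https://eu.llm-gateway.example.com/v1", // placeholder regional endpoint
  },
});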
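
And for the guardrail, a minimal routing sketch over the UnderwritingDecision type from step 1; the queue names are illustrative:

import type { UnderwritingDecision } from "./schemas";

type Route =
  | { queue: "auto_approve" }
  | { queue: "human_review"; reasonCodes: string[] };

function routeDecision(decision: UnderwritingDecision): Route {
  const needsReferral =
    decision.decision !== "approve" ||
    decision.riskBand !== "low" ||
    decision.missingInformation.length > 0 ||
    decision.complianceFlags.length > 0;

  if (needsReferral) {
    // Reason codes travel with the case so the underwriter sees why it was referred.
    return { queue: "human_review", reasonCodes: decision.reasons };
  }
  return { queue: "auto_approve" };
}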

Common Pitfalls

  • Letting the model free-form its answer

    If you accept plain text output, you will eventually get malformed decisions that cannot be stored or audited. Use withStructuredOutput, Zod validation, or both (see the sketch after this list).

  • Mixing policy knowledge with applicant facts

    The model should never infer missing facts from policy text. Keep retrieved guidelines separate from application data so it can cite rules without fabricating values.

  • Skipping human review thresholds

    A lot of teams try to automate everything on day one. In insurance, high-value policies, regulated geographies, sanctions hits, or incomplete applications should always trigger referral.
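
Belt-and-braces for the first pitfall, reusing the schema from step 1: withStructuredOutput already enforces the Zod schema, and an explicit safeParse before persistence catches anything that slips through.

import { UnderwritingDecisionSchema } from "./schemas";

// applicationInput is the raw payload, as in step 4.
const raw = await underwritingChain.invoke({ application: applicationInput });

const checked = UnderwritingDecisionSchema.safeParse(raw);
if (!checked.success) {
  // Never store a malformed decision; route the case to manual referral.
  throw new Error(`Invalid model output: ${checked.error.message}`);
}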

If you want this to hold up in production, treat the agent as a decision support system first and an automation layer second. That’s how you get something underwriters trust instead of another chatbot sitting next to core systems.


By Cyprian Aarons, AI Consultant at Topiax.