How to Build a Claims Processing Agent Using LangChain in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21

claims-processing · langchain · typescript · wealth-management

A claims processing agent in wealth management takes inbound claim requests, extracts the relevant facts, checks them against policy and account data, and routes the case for approval, rejection, or human review. It matters because claims are where money moves, so every decision needs to be fast, auditable, compliant, and defensible to clients, advisors, and regulators.

Architecture

Build this agent as a small workflow, not a single prompt.

  • Ingress layer

    • Receives claim payloads from a CRM, client portal, or operations queue.
    • Normalizes documents like PDFs, emails, and structured JSON into one internal schema.
  • LLM extraction layer

    • Uses LangChain to extract claim fields such as claimant identity, policy/account number, incident date, requested amount, and supporting evidence.
    • Produces structured output with schema validation so downstream logic does not rely on free text.
  • Policy and eligibility tool layer

    • Calls internal services for account status, KYC/AML flags, coverage rules, limits, and beneficiary data.
    • Keeps business logic outside the model.
  • Decision engine

    • Applies deterministic rules for approve / reject / escalate.
    • Forces human review when confidence is low or compliance checks fail.
  • Audit and observability layer

    • Logs inputs, outputs, tool calls, and final decisions with immutable identifiers.
    • Supports regulator-facing traceability and internal model risk reviews.
  • Data boundary controls

    • Enforces residency rules and redacts PII before sending anything to the model provider.
    • Prevents sensitive client data from leaving approved regions or tenants.
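The data boundary controls above can be sketched as a pure redaction function applied before any model call. The patterns and labels below are illustrative assumptions, not a production scrubber; a real deployment would typically use an entity-recognition service rather than regexes:

```typescript
// Minimal sketch of a pre-model PII redaction step. The account-id format,
// labels, and patterns here are hypothetical and for illustration only.
const REDACTION_RULES: Array<{ label: string; pattern: RegExp }> = [
  { label: "ACCOUNT_ID", pattern: /\bA\d{6,}\b/g }, // assumed account format
  { label: "EMAIL", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { label: "SSN", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
];

function redactPII(text: string): string {
  // Apply each rule in order, replacing matches with a bracketed label
  // so the model still sees that a value was present.
  return REDACTION_RULES.reduce(
    (acc, rule) => acc.replace(rule.pattern, `[${rule.label}]`),
    text
  );
}
```

Because redaction happens before the provider boundary, the same function is a natural place to enforce residency checks as well.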

Implementation

1) Define the claim schema and LLM chain

Use a strict schema first. For wealth management workflows, you want typed extraction because free-form output creates audit problems fast.

import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "langchain/output_parsers";

const ClaimSchema = z.object({
  claimantName: z.string(),
  accountId: z.string(),
  claimType: z.enum(["death_benefit", "transfer_error", "fraud", "fee_dispute"]),
  incidentDate: z.string(),
  amountRequested: z.number(),
  currency: z.string().default("USD"),
  supportingDocs: z.array(z.string()).default([]),
});

type Claim = z.infer<typeof ClaimSchema>;

const parser = StructuredOutputParser.fromZodSchema(ClaimSchema);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Extract claims data for wealth management operations. Return only valid structured output."],
  ["human", "{input}\n\n{format_instructions}"],
]);

const extractClaimChain = RunnableSequence.from([
  prompt,
  llm,
]);

StructuredOutputParser gives you a contract. In production I would pair this with retry logic and hard validation before any business action is taken.

2) Add tools for policy checks and account lookup

Keep sensitive checks in your own services. The model should decide what to call; your code should decide what the result means.

import { tool } from "@langchain/core/tools";
import { z as zod } from "zod";

const getAccountStatus = tool(
  async ({ accountId }: { accountId: string }) => {
    // Replace with internal service call
    return {
      accountId,
      status: "active",
      kycStatus: "verified",
      amlFlag: false,
      residency: "SG",
    };
  },
  {
    name: "get_account_status",
    description: "Fetch account compliance and status details",
    schema: zod.object({
      accountId: zod.string(),
    }),
  }
);

const evaluatePolicy = tool(
  async ({ claimType, amountRequested }: { claimType: string; amountRequested: number }) => {
    // Replace with internal policy engine
    const approvedLimit = claimType === "fee_dispute" ? 5000 : 25000;
    return {
      eligible: amountRequested <= approvedLimit,
      approvedLimit,
    };
  },
  {
    name: "evaluate_policy",
    description: "Check claim against product rules and limits",
    schema: zod.object({
      claimType: zod.string(),
      amountRequested: zod.number(),
    }),
  }
);

This is where wealth management differs from generic support automation. Eligibility depends on KYC state, jurisdiction, product terms, beneficiary constraints, and sometimes advisor authorization.
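A sketch of what a deterministic eligibility layer combining those factors can look like. The field names, supported jurisdictions, and advisor-authorization rule below are illustrative assumptions, not a real product rule set:

```typescript
// Hypothetical deterministic eligibility check. Every rejection reason is
// recorded so the audit trail explains *why* a claim was blocked.
interface EligibilityInput {
  kycStatus: "verified" | "pending" | "failed";
  residency: string; // ISO country code
  claimType: string;
  advisorApproved: boolean;
}

const SUPPORTED_RESIDENCIES = new Set(["SG", "HK", "US"]);
const ADVISOR_REQUIRED = new Set(["death_benefit", "fraud"]);

function checkEligibility(
  input: EligibilityInput
): { eligible: boolean; reasons: string[] } {
  const reasons: string[] = [];
  if (input.kycStatus !== "verified") reasons.push("kyc_not_verified");
  if (!SUPPORTED_RESIDENCIES.has(input.residency))
    reasons.push("unsupported_residency");
  if (ADVISOR_REQUIRED.has(input.claimType) && !input.advisorApproved)
    reasons.push("advisor_authorization_missing");
  return { eligible: reasons.length === 0, reasons };
}
```

Keeping this logic in plain code rather than the prompt is what makes it reviewable by compliance.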

3) Orchestrate extraction + checks + decisioning

Use LangChain to extract the claim first, then run deterministic checks. If anything is off — residency mismatch, AML flag, unsupported claim type — escalate.

async function processClaim(rawInput: string) {
  const extractionResult = await extractClaimChain.invoke({
    input: rawInput,
    format_instructions: parser.getFormatInstructions(),
  });
  const parsedClaim: Claim = await parser.parse(
    extractionResult.content as string
  );

  const accountStatusResult = await getAccountStatus.invoke({
    accountId: parsedClaim.accountId,
  });

  const policyResult = await evaluatePolicy.invoke({
    claimType: parsedClaim.claimType,
    amountRequested: parsedClaim.amountRequested,
  });

  if (accountStatusResult.amlFlag || accountStatusResult.status !== "active") {
    return {
      decision: "escalate",
      reason: "account_risk_or_inactive",
      audit: { parsedClaim, accountStatusResult },
    };
  }

  if (!policyResult.eligible) {
    return {
      decision: "reject",
      reason: "policy_limit_exceeded",
      audit: { parsedClaim, policyResult },
    };
  }

  const isHighValue = parsedClaim.amountRequested > 10_000;
  return {
    decision: isHighValue ? "human_review" : "approve",
    reason: isHighValue ? "high_value_claim" : "within_limits",
    audit: { parsedClaim, accountStatusResult, policyResult },
  };
}

The important pattern here is that the model extracts; your application decides. Do not let the LLM directly approve payouts.

4) Add an agent wrapper when you need tool selection

If you want the model to choose between tools dynamically — for example when incoming claims vary widely — use createToolCallingAgent with AgentExecutor.

import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

const agentPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a claims operations agent for wealth management. Use tools when needed. Never finalize payment decisions without policy checks."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createToolCallingAgent({
  llm,
  tools: [getAccountStatus, evaluatePolicy],
  prompt: agentPrompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [getAccountStatus, evaluatePolicy],
});

const result = await executor.invoke({
  input: "Assess this fee dispute claim for account A123 requesting $1200.",
});

Use this pattern when the workflow is less predictable. For stable enterprise claims pipelines I still prefer explicit orchestration because it is easier to test and audit.
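To make the testability point concrete: if the approve/reject/escalate rules live in a pure function, they can be unit-tested with no LLM or network in the loop. The thresholds below mirror the earlier example but remain illustrative:

```typescript
// Pure decision function factored out of the orchestration layer so the
// thresholds can be exercised directly in unit tests.
type Decision = "approve" | "reject" | "escalate" | "human_review";

interface DecisionInput {
  amlFlag: boolean;
  accountActive: boolean;
  policyEligible: boolean;
  amountRequested: number;
}

function decide(input: DecisionInput): Decision {
  if (input.amlFlag || !input.accountActive) return "escalate";
  if (!input.policyEligible) return "reject";
  return input.amountRequested > 10_000 ? "human_review" : "approve";
}
```

A tool-calling agent gives you no equivalent place to pin these rules down, which is exactly why audits of agentic pipelines are harder.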

Production Considerations

  • Deployment

    • Run the agent inside your controlled VPC or private cluster.
    • Keep model endpoints region-bound to satisfy data residency requirements for client records.
  • Monitoring

    • Log every extracted field set, tool call, final decision, latency bucket, and escalation reason.
    • Track false positives on escalations and manual overrides by operations teams.
  • Guardrails

    • Keep approve/reject/escalate logic deterministic; never let the model finalize a payout on its own.
    • Redact PII and enforce residency boundaries before any call leaves your tenant.
    • Force human review for low-confidence extractions, compliance failures, and high-value claims.

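The "immutable identifiers" requirement from the audit layer can be sketched as an append-only record. The hash-chaining scheme below is an illustrative assumption, one simple way to make after-the-fact edits detectable:

```typescript
import { createHash, randomUUID } from "node:crypto";

// Sketch of an append-only audit record: each entry includes the hash of the
// previous entry, so tampering with history breaks the chain.
interface AuditEntry {
  id: string;
  timestamp: string;
  decision: string;
  reason: string;
  prevHash: string;
  hash: string;
}

function appendAudit(
  log: AuditEntry[],
  decision: string,
  reason: string
): AuditEntry {
  const prevHash = log.length ? log[log.length - 1].hash : "genesis";
  const body = {
    id: randomUUID(),
    timestamp: new Date().toISOString(),
    decision,
    reason,
    prevHash,
  };
  const hash = createHash("sha256").update(JSON.stringify(body)).digest("hex");
  const entry = { ...body, hash };
  log.push(entry);
  return entry;
}
```

In production the log would live in write-once storage, but the chaining idea is the same.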

By Cyprian Aarons, AI Consultant at Topiax.
