How to Build a Claims Processing Agent Using LangChain in TypeScript for Retail Banking

By Cyprian Aarons · Updated 2026-04-21
claims-processing · langchain · typescript · retail-banking

A claims processing agent for retail banking takes an incoming customer claim, classifies it, extracts the needed details, checks policy and account context, routes it to the right workflow, and drafts a response or next action. It matters because claims are high-friction, high-risk interactions: if you get the triage wrong, you create compliance issues, slow resolution, and unnecessary manual work.

Architecture

  • Ingress layer

    • Receives claim requests from web, mobile, branch CRM, or back-office queues.
    • Normalizes payloads into a single internal schema.
  • LLM orchestration layer

    • Uses LangChain to classify intent, extract entities, and decide next steps.
    • Keeps prompts narrow and task-specific.
  • Banking context tools

    • Fetches customer profile, account status, transaction history, and product rules.
    • Must be wrapped with strict authorization checks.
  • Policy and compliance guardrails

    • Enforces KYC/AML flags, dispute windows, chargeback rules, and escalation thresholds.
    • Blocks unsupported actions and forces human review where required.
  • Audit and trace store

    • Persists inputs, model outputs, tool calls, and final decisions.
    • Needed for internal audit, regulator review, and incident analysis.
  • Case management integration

    • Creates or updates a case in the bank’s workflow system.
    • Assigns SLA, queue, and reviewer based on claim type and risk.
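The ingress layer's job of normalizing many channels into one internal schema can be sketched as a small adapter per channel. The field names and the web payload shape below are illustrative assumptions, not a fixed contract:

```typescript
// Hypothetical internal claim shape produced by the ingress layer.
// Align field names with your bank's canonical data model.
type InternalClaim = {
  channel: "web" | "mobile" | "branch" | "backoffice";
  customerId: string;
  accountId: string;
  claimType: string;
  amount: number;      // major currency units
  currency: string;    // ISO 4217 code
  description: string;
  receivedAt: string;  // ISO 8601 timestamp
};

// Example adapter for an assumed web payload; each channel gets its own adapter.
function normalizeWebClaim(payload: {
  user_id: string;
  account: string;
  type: string;
  amount_cents: number;
  ccy: string;
  text: string;
}): InternalClaim {
  return {
    channel: "web",
    customerId: payload.user_id,
    accountId: payload.account,
    claimType: payload.type,
    amount: payload.amount_cents / 100, // convert minor to major units
    currency: payload.ccy.toUpperCase(),
    description: payload.text.trim(),
    receivedAt: new Date().toISOString(),
  };
}
```

Keeping the adapters deterministic and model-free means the LLM only ever sees one payload shape, which simplifies prompts and audit logs downstream.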

Implementation

  1. Install LangChain and define your claim schema

Use a structured input/output contract first. For banking workflows, don’t let the model free-write its way through core decisions.

npm install langchain @langchain/openai zod
import { z } from "zod";

export const ClaimInputSchema = z.object({
  customerId: z.string(),
  accountId: z.string(),
  claimType: z.enum(["card_dispute", "unauthorized_transfer", "fee_refund", "cash_withdrawal"]),
  description: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
});

export const ClaimDecisionSchema = z.object({
  category: z.enum(["auto_approve", "needs_review", "reject"]),
  reason: z.string(),
  requiredDocs: z.array(z.string()),
});
  2. Build a classifier chain with ChatOpenAI and withStructuredOutput

This is the main pattern: ask the model to classify the claim into a bounded decision object. withStructuredOutput keeps the output machine-readable.

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { ClaimInputSchema, ClaimDecisionSchema } from "./schemas";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = PromptTemplate.fromTemplate(`
You are a retail banking claims triage assistant.
Classify the claim using bank policy:
- auto_approve only for low-risk fee refunds under policy threshold
- needs_review for disputes or missing evidence
- reject only if clearly out of scope

Claim:
Customer ID: {customerId}
Account ID: {accountId}
Type: {claimType}
Amount: {amount} {currency}
Description: {description}
`);

export async function classifyClaim(input: unknown) {
  const claim = ClaimInputSchema.parse(input);

  const chain = prompt.pipe(llm.withStructuredOutput(ClaimDecisionSchema));
  return await chain.invoke(claim);
}
  3. Add banking context retrieval as tools

The agent should not guess account state. Pull facts from internal systems through explicit functions. In LangChain TypeScript, DynamicStructuredTool is a clean fit for controlled tool use.

import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

export const getAccountContextTool = new DynamicStructuredTool({
  name: "get_account_context",
  description: "Fetches account status and recent transaction summary for a retail banking claim.",
  schema: z.object({
    customerId: z.string(),
    accountId: z.string(),
  }),
  func: async ({ customerId, accountId }) => {
    // Replace with real service call behind mTLS + authz checks
    return JSON.stringify({
      customerId,
      accountId,
      status: "active",
      recentFlags: ["none"],
      lastTransactionAt: "2026-04-20T10:15:00Z",
    });
  },
});

Then wire it into an agent that can decide whether more context is needed before making a recommendation.

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

const agentPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a claims processing assistant for retail banking. Use tools when needed. Never invent account facts."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

// Reuses the `llm` instance from step 2.
const agent = await createToolCallingAgent({
  llm,
  tools: [getAccountContextTool],
  prompt: agentPrompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [getAccountContextTool],
});

const result = await executor.invoke({
  input:
    "Review this unauthorized transfer claim for customer C123 on account A456. Amount is USD 250.",
});
console.log(result.output);
  4. Route outcomes into case management with audit logging

The final step is not “answer the user.” It is “create an auditable bank action.” Persist every decision with timestamps, model version, prompt version, tool outputs, and reviewer routing.

import { randomUUID } from "node:crypto";

type AuditRecord = {
  requestId: string;
  customerId: string;
  decision: string;
  reason: string;
};

async function persistAudit(record: AuditRecord) {
  // Replace with an append-only audit store in production
  console.log(JSON.stringify(record));
}

export async function processClaim(input: unknown) {
  const claim = ClaimInputSchema.parse(input);

  const decision = await classifyClaim(claim);

  await persistAudit({
    requestId: randomUUID(),
    customerId: claim.customerId,
    decision: decision.category,
    reason: decision.reason,
  });

  return decision;
}

Production Considerations

  • Deploy in-region

Run model calls and audit storage in approved regions only. Retail banking data residency requirements usually mean you cannot send PII to arbitrary endpoints or cross-border services without legal review.

  • Log everything needed for audit

Store prompt version, model name, tool call inputs/outputs, final classification, human override reason, and timestamps. If an investigator asks why a claim was auto-routed or rejected, you need traceability.

  • Add hard guardrails before any write action

The agent can recommend; it should not directly approve payouts or close disputes without policy checks. Put deterministic rules in front of execution for thresholds like amount limits, fraud flags, dormant accounts, or sanctions hits.
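A minimal sketch of such a pre-execution gate; the threshold value and flag names here are purely illustrative, not real bank policy:

```typescript
// Hypothetical deterministic gate that runs before any write action.
type GateInput = {
  category: "auto_approve" | "needs_review" | "reject";
  amount: number;
  accountStatus: "active" | "dormant" | "frozen";
  fraudFlag: boolean;
  sanctionsHit: boolean;
};

const AUTO_APPROVE_LIMIT = 50; // illustrative cap for automatic payouts

function gateDecision(input: GateInput): "execute" | "human_review" | "block" {
  // Hard blocks always win, regardless of the model's recommendation.
  if (input.sanctionsHit || input.accountStatus === "frozen") return "block";
  // Risk signals force a human in the loop.
  if (input.fraudFlag || input.accountStatus === "dormant") return "human_review";
  // Only small, model-approved claims execute automatically.
  if (input.category === "auto_approve" && input.amount <= AUTO_APPROVE_LIMIT) {
    return "execute";
  }
  return "human_review";
}
```

Because the gate is plain code, its behavior is testable and reviewable independently of any prompt or model version.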

  • Monitor drift by claim type

Track auto-approval rate, escalation rate, manual reversal rate, and false rejects per product line. A spike in one category usually means either prompt drift or upstream data quality issues.
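These rates can be derived from the audit store. A minimal aggregation sketch, with the row shape assumed for illustration:

```typescript
// Hypothetical audit rows; in practice these come from the audit store.
type AuditRow = {
  claimType: string;
  category: "auto_approve" | "needs_review" | "reject";
};

// Computes per-claim-type rates so a spike in one category stands out.
function ratesByClaimType(rows: AuditRow[]) {
  const out: Record<
    string,
    { total: number; autoApproveRate: number; escalationRate: number; rejectRate: number }
  > = {};
  for (const type of Array.from(new Set(rows.map((r) => r.claimType)))) {
    const group = rows.filter((r) => r.claimType === type);
    const rate = (c: AuditRow["category"]) =>
      group.filter((r) => r.category === c).length / group.length;
    out[type] = {
      total: group.length,
      autoApproveRate: rate("auto_approve"),
      escalationRate: rate("needs_review"),
      rejectRate: rate("reject"),
    };
  }
  return out;
}
```

Alert on deltas against a rolling baseline per product line rather than on absolute values, since claim mix shifts seasonally.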

Common Pitfalls

  1. Letting the model infer missing banking facts

    • Mistake: asking the LLM to decide based on vague descriptions alone.
    • Fix: force tool lookups for account status, transaction history, dispute windows, and product rules before classification.
  2. Skipping structured outputs

    • Mistake: parsing free-form text like “Approved because it looks valid.”
    • Fix: use withStructuredOutput plus Zod schemas so downstream systems receive stable fields like category, reason, and requiredDocs.
  3. Treating compliance as prompt text only

    • Mistake: putting policy instructions in the system prompt and assuming that is enough.
    • Fix:
      • enforce deterministic policy checks outside the LLM,
      • block unsupported actions in code,
      • keep an immutable audit trail for every decision path.
  4. Ignoring data residency and PII handling

    • Mistake: sending full statements or raw identifiers to external services by default.
    • Fix:
      • redact sensitive fields before prompts,
      • minimize payloads,
      • pin execution to approved regions,
      • separate audit storage from model runtime where required.
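As a sketch of field-level redaction before text reaches a prompt. The masking patterns below are assumptions for illustration; real PII detection should use a vetted library or service aligned with your bank's data classification policy:

```typescript
// Hypothetical redaction helper applied to claim text before prompting.
function redactForPrompt(text: string): string {
  return text
    // Mask long digit runs that look like card or account numbers.
    .replace(/\b\d{12,19}\b/g, "[CARD_OR_ACCOUNT]")
    // Mask email addresses.
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]")
    // Mask IBAN-like strings.
    .replace(/\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b/g, "[IBAN]");
}
```

Redacting at the boundary keeps the placeholders consistent across prompts, so classification quality can be evaluated without ever storing raw identifiers in model logs.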

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

