# How to Build a Claims Processing Agent Using AutoGen in TypeScript for Investment Banking
A claims processing agent in investment banking triages incoming claims, extracts the relevant facts from documents and emails, checks them against policy and trade records, and routes the case for human approval when needed. It matters because claims handling sits at the intersection of client service, regulatory exposure, auditability, and operational risk. If your agent gets this wrong, you do not just create bad automation — you create compliance problems.
## Architecture

- **Ingress layer**
  - Receives claim emails, PDFs, scanned forms, and case metadata from internal systems.
  - Normalizes everything into a single claim context object.
- **Extraction agent**
  - Reads unstructured documents and pulls out claimant identity, trade references, dates, amounts, venue, and supporting evidence.
  - Produces structured JSON for downstream checks.
- **Policy and controls agent**
  - Validates the claim against business rules: eligibility windows, product coverage, KYC status, sanctions flags, and escalation thresholds.
  - Decides whether the claim can be auto-processed or must be escalated.
- **Audit logger**
  - Stores prompts, model outputs, tool calls, timestamps, and final decisions.
  - Required for internal audit and regulator review.
- **Human review queue**
  - Handles exceptions: ambiguous claims, missing documents, high-value cases, or anything touching restricted jurisdictions.
- **Decision writer**
  - Generates a final case note in bank-approved language for CRM or case management systems.
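The "single claim context object" the ingress layer produces can be sketched as a plain TypeScript type with a small normalizer that merges the different input channels. This is a sketch, not AutoGen API: the `RawIngress` shape and its field names are hypothetical stand-ins for whatever your internal systems actually deliver.

```typescript
// Hypothetical shape of what internal systems hand to the ingress layer.
interface RawIngress {
  emailBody?: string;
  ocrTexts?: string[]; // OCR output from PDFs and scanned forms
  caseMetadata?: Record<string, string>;
}

// The single claim context object that flows through every later stage.
interface ClaimContext {
  sources: string[]; // all raw text, in arrival order
  metadata: Record<string, string>;
  receivedAt: string; // ISO timestamp for the audit trail
}

function normalizeIngress(raw: RawIngress): ClaimContext {
  const sources: string[] = [];
  if (raw.emailBody) sources.push(raw.emailBody);
  for (const text of raw.ocrTexts ?? []) sources.push(text);
  return {
    sources,
    metadata: raw.caseMetadata ?? {},
    receivedAt: new Date().toISOString(),
  };
}
```

Keeping this object flat and explicit makes it trivial to log at every stage, which pays off when audit asks how a given claim was assembled.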
## Implementation

### 1. Install AutoGen for TypeScript and define your claim state

You want a small state object that carries the extracted facts through the workflow. Keep it explicit; banks hate hidden state.

```bash
npm install @autogenai/autogen openai zod
```

```typescript
import { AssistantAgent, OpenAIChatCompletionClient } from "@autogenai/autogen";
import { z } from "zod";

// Explicit claim state that travels through the whole workflow.
const ClaimSchema = z.object({
  claimId: z.string(),
  clientName: z.string(),
  tradeId: z.string().optional(),
  instrument: z.string().optional(),
  claimedAmount: z.number(),
  currency: z.string(),
  eventDate: z.string(),
  jurisdiction: z.string(),
  documentsReceived: z.array(z.string()),
});

type Claim = z.infer<typeof ClaimSchema>;

const llmClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY!,
});
```
### 2. Create an extraction agent that turns raw text into structured claim data

In AutoGen TypeScript, `AssistantAgent` is the core building block. Use it to extract facts from emails or PDFs after OCR has already happened upstream.

```typescript
const extractionAgent = new AssistantAgent({
  name: "claim_extractor",
  modelClient: llmClient,
  systemMessage:
    "You extract investment banking claims into strict JSON. Return only fields matching the schema.",
});

async function extractClaim(rawText: string): Promise<Claim> {
  const result = await extractionAgent.run(
    `Extract a claim object from this text:\n\n${rawText}\n\nReturn JSON only.`
  );
  // Take the agent's final message and validate it before use.
  const content = result.messages.at(-1)?.content ?? "";
  const parsed = JSON.parse(content);
  return ClaimSchema.parse(parsed);
}
```
### 3. Add a policy agent to decide auto-approval vs escalation

This is where investment banking requirements matter. The agent should not "reason freely"; it should evaluate against explicit controls like claim amount thresholds, jurisdiction restrictions, missing evidence, and KYC/sanctions flags from internal systems.

```typescript
const policyAgent = new AssistantAgent({
  name: "claim_policy_checker",
  modelClient: llmClient,
  systemMessage:
    "You assess claims against bank policy. Be conservative. If any control is unclear, escalate.",
});

async function assessClaim(claim: Claim): Promise<"approve" | "escalate"> {
  const prompt = `
Assess this claim for auto-processing:
${JSON.stringify(claim)}

Rules:
- Escalate if claimedAmount > 50000
- Escalate if jurisdiction is on restricted list
- Escalate if tradeId is missing for trade-related claims
- Escalate if documentsReceived has fewer than 2 items
- Otherwise approve

Return exactly one word: approve or escalate
`;
  const result = await policyAgent.run(prompt);
  const decision = (result.messages.at(-1)?.content ?? "").trim().toLowerCase();
  // Anything that is not an explicit "approve" falls back to escalation.
  return decision === "approve" ? "approve" : "escalate";
}
```
### 4. Wire extraction + policy into a single processing flow with audit logging

The production pattern is simple: extract first, validate second, then persist every step. Do not let the model write directly to your core systems without guardrails.

```typescript
async function processClaim(rawText: string) {
  const extracted = await extractClaim(rawText);
  const decision = await assessClaim(extracted);

  // Persist every step; internal audit will ask for this later.
  const auditRecord = {
    timestamp: new Date().toISOString(),
    claimId: extracted.claimId,
    extracted,
    decision,
    model: "gpt-4o-mini",
  };
  console.log("AUDIT", JSON.stringify(auditRecord));

  if (decision === "approve") {
    return {
      status: "approved",
      note: `Claim ${extracted.claimId} approved for ${extracted.claimedAmount} ${extracted.currency}.`,
    };
  }
  return {
    status: "escalated",
    note: `Claim ${extracted.claimId} requires manual review due to policy controls.`,
  };
}
```
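As a guardrail, the same rules the policy prompt describes can also be enforced deterministically in code, so a model mistake can never auto-approve a claim the written controls would escalate. This is a sketch under stated assumptions: the `ClaimFacts` shape, the `tradeRelated` flag, and the restricted-jurisdiction list are illustrative placeholders.

```typescript
// Placeholder list; in production this comes from a controlled config source.
const RESTRICTED_JURISDICTIONS = new Set(["IR", "KP", "SY"]);

interface ClaimFacts {
  claimedAmount: number;
  jurisdiction: string;
  tradeId?: string;
  tradeRelated: boolean;
  documentsReceived: string[];
}

// Mirrors the prompt's rules: any single failed control forces escalation.
function assessClaimDeterministic(claim: ClaimFacts): "approve" | "escalate" {
  if (claim.claimedAmount > 50_000) return "escalate";
  if (RESTRICTED_JURISDICTIONS.has(claim.jurisdiction)) return "escalate";
  if (claim.tradeRelated && !claim.tradeId) return "escalate";
  if (claim.documentsReceived.length < 2) return "escalate";
  return "approve";
}
```

A conservative wiring takes the stricter of the two answers: a claim is auto-processed only when both the model's assessment and this deterministic check say approve.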
## Production Considerations

- **Data residency**
  - Keep claim data in-region if you operate across EMEA, APAC, or regulated US environments.
  - If your bank requires it, route only redacted text to the model and keep raw docs in your own storage layer.
- **Auditability**
  - Log prompt versions, output versions, tool inputs/outputs, and final decisions.
  - Store enough context to reconstruct why a claim was escalated six months later.
- **Guardrails**
  - Enforce schema validation with Zod before any downstream action.
  - Never allow direct settlement instructions from an LLM without deterministic checks and human approval for high-value cases.
- **Monitoring**
  - Track escalation rate, false approvals caught by ops teams, average handling time, and document-missing rates.
  - Alert when the model starts over-escalating; that usually means prompt drift or upstream OCR degradation.
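The data-residency point about routing only redacted text to the model can be sketched with a simple pattern-based masker. The patterns below are purely illustrative; a real deployment would use the bank's approved PII/redaction rules, not three regexes.

```typescript
// Illustrative patterns only; production redaction rules come from compliance.
const REDACTION_RULES: Array<[RegExp, string]> = [
  [/\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/g, "[IBAN]"], // IBAN-like strings
  [/\b\d{8,12}\b/g, "[ACCOUNT]"],                  // long account numbers
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],     // email addresses
];

// Apply every rule in order; only the redacted text leaves your network.
function redactForModel(text: string): string {
  let out = text;
  for (const [pattern, replacement] of REDACTION_RULES) {
    out = out.replace(pattern, replacement);
  }
  return out;
}
```

The raw document stays in your own storage layer; only the output of `redactForModel` is ever sent to the LLM, and the audit log records both versions.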
## Common Pitfalls

- **Letting the agent infer policy instead of encoding it**
  Claims logic in banking must be explicit. Put thresholds and restricted-jurisdiction lists in code or config; use the model for extraction and classification support only.
- **Skipping schema validation**
  LLM output is not trusted input. Parse with `JSON.parse`, validate with `ClaimSchema.parse`, then reject anything malformed before it reaches case management or payment workflows.
- **Ignoring human review paths**
  High-value claims will always need escalation rules. Build the manual queue on day one so operations can override decisions without breaking the workflow.
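The schema-validation pitfall can be illustrated without any library: parse, check every required field's type, and throw before anything reaches downstream systems. This is a minimal hand-rolled stand-in for `ClaimSchema.parse`, with a deliberately reduced `ValidatedClaim` shape for brevity.

```typescript
interface ValidatedClaim {
  claimId: string;
  claimedAmount: number;
  currency: string;
}

// Throws on anything malformed; callers treat a throw as "reject and escalate".
function parseClaimStrict(modelOutput: string): ValidatedClaim {
  let data: unknown;
  try {
    data = JSON.parse(modelOutput);
  } catch {
    throw new Error("model output is not valid JSON");
  }
  if (typeof data !== "object" || data === null) {
    throw new Error("model output is not a JSON object");
  }
  const { claimId, claimedAmount, currency } = data as Record<string, unknown>;
  if (typeof claimId !== "string") throw new Error("claimId missing or not a string");
  if (typeof claimedAmount !== "number" || !Number.isFinite(claimedAmount)) {
    throw new Error("claimedAmount missing or not a finite number");
  }
  if (typeof currency !== "string") throw new Error("currency missing or not a string");
  return { claimId, claimedAmount, currency };
}
```

The point is the failure mode: a malformed claim never silently flows onward; it throws, gets logged, and lands in the human review queue.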
## Keep Learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.