How to Build a Claims Processing Agent Using AutoGen in TypeScript for Banking
A claims processing agent for banking takes a customer claim, extracts the relevant facts, checks policy rules and account data, asks for missing evidence, and drafts a decision packet for human review. It matters because claims are high-volume, regulated workflows where speed, consistency, auditability, and data handling rules all matter at once.
Architecture
- User-facing intake layer
  - Receives claim text, attachments, and metadata from a branch portal or case management system.
  - Normalizes inputs into a structured claim object.
- AutoGen orchestration layer
  - Uses `AssistantAgent` to reason over the claim.
  - Uses `UserProxyAgent` to execute tool calls and enforce human-in-the-loop approval.
- Banking data tools
  - Fetches customer profile, policy eligibility, transaction history, and prior claim records.
  - Exposes only scoped functions with strong input validation.
- Decision and compliance layer
  - Applies bank rules for eligibility, fraud flags, KYC/AML checks, and escalation thresholds.
  - Produces an auditable rationale with source references.
- Audit logging layer
  - Stores prompts, tool calls, outputs, timestamps, and reviewer actions.
  - Keeps immutable records for compliance review.
- Redaction and residency controls
  - Removes PII before model calls when possible.
  - Routes data to approved regions only.
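The redaction layer can start as a simple pass over known PII shapes before any text reaches a model. A minimal sketch; the patterns, placeholders, and `redactPII` helper below are illustrative, not a complete PII policy, and a real deployment would use the bank's approved taxonomy and a vetted redaction service:

```typescript
// Illustrative PII patterns, applied in order. Rough heuristics only:
// a production system needs a proper PII detection service.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b/g, "[IBAN]"],   // rough IBAN shape
  [/\b\d{13,19}\b/g, "[CARD_NUMBER]"],               // long digit runs
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],       // email addresses
];

// Replace each matched pattern with its label before any model call.
function redactPII(text: string): string {
  return PII_PATTERNS.reduce(
    (out, [pattern, label]) => out.replace(pattern, label),
    text,
  );
}
```

Run this over free-text claim fields during intake, so the downstream prompt only ever sees the redacted form.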
Implementation
1. Install AutoGen for TypeScript and define the claim shape
Use the TypeScript package that exposes `AssistantAgent`, `UserProxyAgent`, and `OpenAIChatCompletionClient`. Keep your claim payload explicit so you can validate it before any model call.
```typescript
import { AssistantAgent, UserProxyAgent } from "@autogen/agentchat";
import { OpenAIChatCompletionClient } from "@autogen/openai";
import { z } from "zod";

const ClaimSchema = z.object({
  claimId: z.string(),
  customerId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  reason: z.string(),
  documents: z.array(z.string()).default([]),
});

type Claim = z.infer<typeof ClaimSchema>;

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY!,
});
```
2. Register banking tools on the user proxy
The agent should not “guess” customer status. Put retrieval behind tools owned by the `UserProxyAgent`, then let the assistant request them through function calling. That gives you an audit point and lets you enforce residency or access policies in one place.
```typescript
const userProxy = new UserProxyAgent({
  name: "bank_user_proxy",
});

userProxy.registerFunction({
  name: "getCustomerProfile",
  description: "Fetch customer profile for claims review",
  parameters: {
    type: "object",
    properties: {
      customerId: { type: "string" },
    },
    required: ["customerId"],
  },
  functionMap: {
    getCustomerProfile: async ({ customerId }: { customerId: string }) => {
      // Replace with internal service call
      return {
        customerId,
        kycStatus: "verified",
        residencyRegion: "eu-west-1",
        riskTier: "medium",
      };
    },
  },
});

userProxy.registerFunction({
  name: "getClaimHistory",
  description: "Fetch prior claims for the customer",
  parameters: {
    type: "object",
    properties: {
      customerId: { type: "string" },
    },
    required: ["customerId"],
  },
  functionMap: {
    getClaimHistory: async ({ customerId }: { customerId: string }) => {
      return [
        { claimId: "CLM-1001", status: "approved", amount: 1200 },
        { claimId: "CLM-1044", status: "rejected", amount: 800 },
      ];
    },
  },
});
```
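Since every tool call is an audit point, one approach is to wrap each tool function so arguments, outputs, and timestamps are captured on every invocation. The `withAudit` helper and in-memory log below are illustrative stand-ins for your bank's append-only audit store:

```typescript
import { createHash } from "node:crypto";

type AuditRecord = {
  tool: string;
  argsHash: string; // hash of the arguments, so raw PII stays out of the log index
  output: unknown;
  calledAt: string;
};

// Stand-in for an append-only audit store (database, WORM storage, etc.).
const auditLog: AuditRecord[] = [];

// Wrap any async tool function so each invocation is recorded
// before its result flows back to the assistant.
function withAudit<A extends object, R>(
  tool: string,
  fn: (args: A) => Promise<R>,
): (args: A) => Promise<R> {
  return async (args: A) => {
    const output = await fn(args);
    auditLog.push({
      tool,
      argsHash: createHash("sha256").update(JSON.stringify(args)).digest("hex"),
      output,
      calledAt: new Date().toISOString(),
    });
    return output;
  };
}
```

You would then register `withAudit("getClaimHistory", ...)` instead of the bare function, so auditing cannot be skipped by any individual tool implementation.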
3. Create the claims assistant with a strict system prompt
Keep the assistant focused on extraction, policy application, and escalation. In banking, you want deterministic behavior around refusal boundaries and clear instructions to never invent missing facts.
```typescript
const claimsAgent = new AssistantAgent({
  name: "claims_processor",
  modelClient,
  systemMessage: [
    "You process banking claims.",
    "Only use provided tools for customer data.",
    "Do not infer missing facts.",
    "If evidence is insufficient, request specific documents.",
    "Always produce an audit-friendly summary with decision rationale.",
    "Flag possible fraud, AML/KYC issues, or residency concerns.",
  ].join(" "),
});
```
4. Run the workflow and capture an auditable result
The pattern here is simple: validate input, send the structured task to the assistant, let it call tools through the proxy, then persist the final output alongside tool traces. For production banking systems, this final object should be stored in your case management database with immutable timestamps.
```typescript
async function processClaim(rawClaim: unknown) {
  const claim = ClaimSchema.parse(rawClaim);

  const task = `
Review this banking claim and draft a decision recommendation.

Claim:
${JSON.stringify(claim, null, 2)}

Return:
- extracted facts
- required follow-up if any
- decision recommendation
- compliance notes
`;

  const result = await claimsAgent.run(task, userProxy);

  return {
    claimId: claim.claimId,
    outputText: typeof result === "string" ? result : JSON.stringify(result),
    reviewedAt: new Date().toISOString(),
    modelName: "gpt-4o-mini",
  };
}

processClaim({
  claimId: "CLM-2009",
  customerId: "CUST-7781",
  amount: 2500,
  currency: "USD",
  reason: "Disputed card transaction", // required by ClaimSchema
})
  .then(console.log)
  .catch(console.error);
```
Production Considerations
- Enforce human approval on adverse decisions
  - Never auto-close denied claims without reviewer sign-off.
  - Route low-confidence or high-value cases to operations staff.
- Log everything needed for audit
  - Store prompt input hashes, tool call arguments, tool outputs, final recommendations, and reviewer actions.
  - Keep retention aligned with your bank’s regulatory policy.
- Apply residency-aware routing
  - Keep EU claimant data in EU-approved infrastructure.
  - Block cross-region tool calls if they violate policy or contractual constraints.
- Add guardrails around sensitive outputs
  - Redact account numbers, IDs, addresses, and payment details before model calls where possible.
  - Use allowlisted tools only; no free-form network access from the agent runtime.
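Residency-aware routing can be enforced at the tool boundary: before a tool executes, check the claimant's region against the regions that tool is approved to touch. A minimal sketch; the allowlist values and `assertResidency` helper are illustrative, not real policy:

```typescript
// Per-tool region allowlist. Illustrative values only; source this
// from your bank's data-residency policy, not from code.
const TOOL_REGIONS: Record<string, string[]> = {
  getCustomerProfile: ["eu-west-1", "eu-central-1"],
  getClaimHistory: ["eu-west-1"],
};

class ResidencyViolationError extends Error {}

// Throwing here blocks the cross-region call before any data moves.
function assertResidency(tool: string, claimantRegion: string): void {
  const allowed = TOOL_REGIONS[tool] ?? [];
  if (!allowed.includes(claimantRegion)) {
    throw new ResidencyViolationError(
      `Tool ${tool} is not approved for region ${claimantRegion}`,
    );
  }
}
```

Call `assertResidency` at the top of each registered tool function, so a policy violation fails loudly and lands in the audit log instead of silently moving data across regions.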
Common Pitfalls
- Letting the model decide without source data
  - Bad pattern: asking the agent to “figure out eligibility” from raw text alone.
  - Fix it by forcing retrieval through tools like `getCustomerProfile` and `getClaimHistory`.
- Skipping schema validation
  - Bad pattern: passing arbitrary JSON into the agent runtime.
  - Fix it with Zod or equivalent validation before any LLM call so malformed claims fail fast.
- Ignoring compliance boundaries
  - Bad pattern: sending full PII to the model when only partial context is needed.
  - Fix it by redacting unnecessary fields, restricting regions, and logging every decision path for audit review.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit