How to Build a Claims Processing Agent Using LangGraph in TypeScript for Fintech
A claims processing agent in fintech takes an incoming claim, validates the request against policy and transaction data, checks for fraud signals, routes edge cases to a human, and produces an auditable decision trail. That matters because claims are where money moves, compliance gets enforced, and bad automation turns into chargebacks, regulatory issues, and customer churn.
Architecture
A production claims agent in fintech needs these components:
- Ingress layer
  - Receives claim payloads from an API, queue, or workflow trigger
  - Normalizes fields like claimant ID, account ID, amount, merchant, and timestamps
- Policy and eligibility node
  - Checks product rules, coverage windows, limits, exclusions, and jurisdiction constraints
  - Uses deterministic logic before any model call
- Risk and fraud scoring node
  - Pulls transaction history, velocity signals, device metadata, and prior disputes
  - Produces a score plus reasons for auditability
- Decision router
  - Chooses between approve, reject, request-more-info, or escalate-to-human
  - Keeps low-risk cases automated and high-risk cases controlled
- Audit trail store
  - Persists every state transition, model output, tool call, and final decision
  - Required for compliance reviews and post-incident analysis
- Human review handoff
  - Sends ambiguous or regulated cases to ops with full context
  - Prevents the agent from making unsupported decisions
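The ingress layer above can be sketched as a small normalizer. The raw payload shape here (`claim_id`, `amount_cents`, `region`, a nested `customer` object) is an assumption about what an upstream webhook or queue message might look like; adapt the mapping to your actual schema.

```typescript
// Hypothetical raw payload shape from an upstream webhook or queue message.
type RawClaimPayload = {
  claim_id?: string;
  customer?: { id?: string };
  amount_cents?: number;
  merchant_id?: string;
  region?: string;
};

// The clean shape the rest of the graph can rely on.
type NormalizedClaim = {
  claimId: string;
  customerId: string;
  amount: number; // major units, e.g. dollars
  merchantId: string;
  jurisdiction: string;
};

// Reject malformed payloads at the edge so downstream nodes can assume clean fields.
function normalizeClaim(raw: RawClaimPayload): NormalizedClaim {
  if (!raw.claim_id || !raw.customer?.id || raw.amount_cents == null || !raw.merchant_id) {
    throw new Error(`Malformed claim payload: ${JSON.stringify(raw)}`);
  }
  return {
    claimId: raw.claim_id,
    customerId: raw.customer.id,
    amount: raw.amount_cents / 100,
    merchantId: raw.merchant_id,
    jurisdiction: (raw.region ?? "US").toUpperCase(),
  };
}
```

Failing loudly at ingress keeps validation errors out of the graph itself, which simplifies both the node logic and the audit trail.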
Implementation
1. Define typed state for the claim workflow
Use a typed graph state so every node reads and writes predictable fields. In fintech workflows this is non-negotiable because you need traceability across policy checks, risk checks, and final decisions.
```typescript
import { StateGraph, START, END, Annotation } from "@langchain/langgraph";

type ClaimStatus = "pending" | "approved" | "rejected" | "needs_review";

const ClaimAnnotation = Annotation.Root({
  claimId: Annotation<string>(),
  customerId: Annotation<string>(),
  amount: Annotation<number>(),
  merchantId: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  status: Annotation<ClaimStatus>({
    default: () => "pending",
    reducer: (_, next) => next,
  }),
  policyEligible: Annotation<boolean | undefined>(),
  fraudScore: Annotation<number | undefined>(),
  decisionReason: Annotation<string | undefined>(),
});

// Derive the working state type from the annotation so the two can never drift apart.
type ClaimState = typeof ClaimAnnotation.State;
```
2. Add deterministic nodes before any model-driven step
Keep policy checks pure. For fintech claims you want hard rules first so the agent does not “reason” around eligibility constraints.
```typescript
const policyCheck = async (state: typeof ClaimAnnotation.State) => {
  const eligible =
    state.amount <= 5000 &&
    state.jurisdiction === "US" &&
    !["merchant_blacklist_1", "merchant_blacklist_2"].includes(state.merchantId);
  return {
    policyEligible: eligible,
    decisionReason: eligible ? "Policy passed" : "Policy failed",
    // Cast so TypeScript keeps the literal union instead of widening to string.
    status: (eligible ? "pending" : "rejected") as ClaimStatus,
  };
};

const fraudCheck = async (state: typeof ClaimAnnotation.State) => {
  // Replace with a real scoring service call.
  const score = state.amount > 2000 ? 82 : Math.floor(Math.random() * 40);
  return {
    fraudScore: score,
    decisionReason: score >= 80 ? "High fraud risk" : "Fraud risk acceptable",
    status: (score >= 80 ? "needs_review" : "pending") as ClaimStatus,
  };
};
```
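As the comment notes, the hardcoded score should come from a real scoring service. Here is one hedged sketch of that call, assuming a hypothetical HTTP scoring API reachable at `FRAUD_API_URL` that returns `{ score, reasons }`; the endpoint, request body, and response shape are all placeholders for your own service.

```typescript
// Hypothetical fraud-scoring service client; endpoint and schema are assumptions.
type FraudScoreResponse = { score: number; reasons: string[] };

async function fetchFraudScore(
  claimId: string,
  customerId: string,
  amount: number
): Promise<FraudScoreResponse> {
  const res = await fetch(`${process.env.FRAUD_API_URL}/score`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ claimId, customerId, amount }),
  });
  if (!res.ok) {
    // Fail closed: treat scoring outages as maximum risk so the claim
    // routes to human review instead of being auto-approved blind.
    return { score: 100, reasons: ["scoring_service_unavailable"] };
  }
  return (await res.json()) as FraudScoreResponse;
}
```

The fail-closed branch is the important design choice: in a fintech workflow, a degraded scoring service should never translate into silent auto-approvals.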
3. Route decisions with addConditionalEdges
This is the core LangGraph routing pattern: based on the current state, the graph decides whether to end, approve automatically, or hand off for review.
const decideNext = (state: typeof ClaimAnnotation.State) => {
if (state.status === "rejected") return END;
if (state.status === "needs_review") return "humanReview";
if (state.policyEligible && (state.fraudScore ?? 0) < threshold) return "approve";
};
const threshold = parseInt(process.env.FRAUD_THRESHOLD ?? "75", 10);
const approve = async (state: typeof ClaimAnnotation.State) => {
return {
status: "approved" as const,
decisionReason: `Auto-approved under threshold ${threshold}`,
};
};
const humanReview = async (state: typeof ClaimAnnotation.State) => {
return {
status: "needs_review" as const,
decisionReason:
`Escalated for manual review due to fraud score ${state.fraudScore}`,
};
};
4. Build the graph and compile it
StateGraph, START, END, addNode, addEdge, addConditionalEdges, and compile() are the core APIs here.
```typescript
const graph = new StateGraph(ClaimAnnotation)
  .addNode("policyCheck", policyCheck)
  .addNode("fraudCheck", fraudCheck)
  .addNode("approve", approve)
  .addNode("humanReview", humanReview)
  .addEdge(START, "policyCheck")
  .addEdge("policyCheck", "fraudCheck")
  .addConditionalEdges("fraudCheck", decideNext)
  .addEdge("approve", END)
  .addEdge("humanReview", END);

export const claimAgent = graph.compile();
```
5. Run the agent with real claim input
Use invoke() to execute one claim at a time. In production you would wrap this behind an API endpoint or queue consumer.
```typescript
async function main() {
  const result = await claimAgent.invoke({
    claimId: "clm_10001",
    customerId: "cus_9001",
    amount: 2450,
    merchantId: "merchant_ok_9",
    jurisdiction: "US",
  });
  console.log(result);
}

main().catch(console.error);
```
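As noted above, in production the agent sits behind an API endpoint or queue consumer. One way to sketch the endpoint with Node's built-in `http` module: `invokeClaim` is injected rather than importing `claimAgent` directly, so the handler can be tested with a stub. The `/claims` route and the payload shape are assumptions.

```typescript
import http from "node:http";

type ClaimInput = {
  claimId: string;
  customerId: string;
  amount: number;
  merchantId: string;
  jurisdiction: string;
};
type ClaimResult = ClaimInput & { status: string; decisionReason?: string };

// Minimal HTTP wrapper; pass claimAgent.invoke (or a stub in tests) as invokeClaim.
function makeClaimServer(invokeClaim: (input: ClaimInput) => Promise<ClaimResult>) {
  return http.createServer(async (req, res) => {
    if (req.method !== "POST" || req.url !== "/claims") {
      res.writeHead(404).end();
      return;
    }
    let body = "";
    for await (const chunk of req) body += chunk;
    try {
      const result = await invokeClaim(JSON.parse(body));
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(result));
    } catch (err) {
      // Malformed JSON or a graph failure both surface as a 400 here;
      // a real service would distinguish the two.
      res.writeHead(400).end(JSON.stringify({ error: String(err) }));
    }
  });
}
```

A queue consumer follows the same shape: parse, invoke, persist the result, acknowledge the message only after the audit record is written.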
Production Considerations
- Audit logging: Persist every input and output from each node with timestamps, correlation IDs, and operator IDs. Fintech teams need a full evidence chain for disputes and regulator requests.
- Data residency: Keep customer PII and transaction data in-region. If your deployment spans multiple clouds or geographies, make sure the graph runtime never ships regulated data across borders.
- Guardrails on model use: Do not let an LLM make final eligibility decisions without deterministic checks. Use models only for extraction, summarization of evidence, or drafting reviewer notes.
- Monitoring: Track approval rate drift, manual-review rate, false positive fraud flags, latency per node, and rejected-by-policy counts. A spike in any of these usually means upstream data changed or a rule regressed.
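The audit-logging point can be sketched as a generic node wrapper. `persistAuditEvent` here is a placeholder sink; swap it for a write to your database, Kafka topic, or append-only event log.

```typescript
type AuditEvent = {
  node: string;
  correlationId: string;
  startedAt: string;
  durationMs: number;
  input: unknown;
  output: unknown;
};

// Placeholder sink: replace with your durable event store.
async function persistAuditEvent(event: AuditEvent): Promise<void> {
  console.log(JSON.stringify(event));
}

// Wraps any node so its input, output, and timing are recorded
// before the state update is returned to the graph.
function withAudit<S extends { claimId: string }, U>(
  node: string,
  fn: (state: S) => Promise<U>
): (state: S) => Promise<U> {
  return async (state) => {
    const started = Date.now();
    const output = await fn(state);
    await persistAuditEvent({
      node,
      correlationId: state.claimId,
      startedAt: new Date(started).toISOString(),
      durationMs: Date.now() - started,
      input: state,
      output,
    });
    return output;
  };
}
```

Apply it at registration time, e.g. `.addNode("policyCheck", withAudit("policyCheck", policyCheck))`, so every transition is recorded without touching node logic.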
Common Pitfalls
- Putting LLM reasoning before hard rules: If you let a model decide eligibility first, you will get inconsistent outcomes. Always run policy validation before any probabilistic step.
- Not persisting intermediate state: If a workflow fails after fraud scoring but before final routing, you need the intermediate values for replay. Store node outputs in your event log or database.
- Ignoring reviewer handoff context: Sending “needs review” without reasons forces analysts to re-open systems and reconstruct evidence manually. Include policy results, scores, thresholds used, and source transaction IDs in the handoff payload.
- Skipping region-aware controls: Claims often contain PII tied to legal residency requirements. Enforce regional storage and processing rules at the ingress layer before the graph runs.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.