How to Build a Fraud Detection Agent Using LangGraph in TypeScript for Pension Funds
A fraud detection agent for pension funds triages suspicious activity, enriches it with policy and account context, and decides whether to block, escalate, or request human review. That matters because pension operations are high-trust, regulated, and long-lived: false negatives create loss and compliance exposure, while false positives frustrate retirees and ops teams.
Architecture
- **Event intake**
  - Receives payment requests, beneficiary changes, address updates, login anomalies, and transfer instructions.
  - Normalizes events into a single schema before they hit the graph (a normalizer sketch follows this list).
- **Risk enrichment node**
  - Pulls member history, account age, contribution patterns, device metadata, IP reputation, and prior disputes.
  - Adds pension-specific signals like recent beneficiary edits or bank-account changes.
- **Policy evaluation node**
  - Applies rules for regulatory thresholds, segregation of duties, and mandatory review cases.
  - Flags events that require audit logging or jurisdiction-specific handling.
- **LLM reasoning node**
  - Summarizes why the event is suspicious in plain language.
  - Produces a structured recommendation: `approve`, `hold`, `escalate`, or `reject`.
- **Human review handoff**
  - Sends high-risk cases to an investigator queue with evidence attached.
  - Keeps the final decision traceable for audit.
- **Audit trail store**
  - Persists every state transition, tool call, and decision output.
  - Supports later reconstruction for compliance reviews and internal investigations.
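To make the intake step concrete, here is a minimal normalizer sketch. The `RawBankWebhook` shape and the `"payout"` mapping are assumptions for illustration; the `FraudEvent` type it targets is the one defined in step 1 of the implementation below.

```ts
// Hypothetical provider payload; your upstream systems will differ.
type RawBankWebhook = {
  id: string;
  member: string;
  kind: "beneficiary_change" | "bank_update" | "payout" | "login";
  value?: number;
  geo?: string;
};

// Map the provider-specific payload onto the single FraudEvent schema
// (defined in step 1 below) before anything enters the graph.
const normalizeEvent = (raw: RawBankWebhook): FraudEvent => ({
  eventId: raw.id,
  memberId: raw.member,
  type: raw.kind === "payout" ? "withdrawal" : raw.kind,
  amount: raw.value,
  country: raw.geo,
});
```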
Implementation
1) Define the state and graph shape
Use a typed state object so every node reads and writes the same contract. For fraud workflows, keep raw inputs separate from derived risk fields so your audit trail stays clean.
```ts
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

type FraudEvent = {
  eventId: string;
  memberId: string;
  type: "beneficiary_change" | "bank_update" | "withdrawal" | "login";
  amount?: number;
  country?: string;
};

const FraudState = Annotation.Root({
  event: Annotation<FraudEvent>(),
  memberAgeDays: Annotation<number>(),
  recentBankChangeDays: Annotation<number | null>(),
  deviceRiskScore: Annotation<number>(),
  policyFlags: Annotation<string[]>(),
  riskScore: Annotation<number>(),
  recommendation: Annotation<"approve" | "hold" | "escalate" | "reject">(),
  rationale: Annotation<string>(),
});

type FraudStateType = typeof FraudState.State;
```
2) Add deterministic enrichment and policy checks
Keep these nodes deterministic. In regulated environments, you want repeatable outputs for the same inputs when auditors ask why a case was escalated.
```ts
const enrichMemberContext = async (state: FraudStateType) => {
  const event = state.event;
  // Replace with real data access layer calls
  const memberAgeDays = event.type === "withdrawal" ? 900 : 120;
  const recentBankChangeDays = event.type === "bank_update" ? 0 : null;
  const deviceRiskScore = event.country && event.country !== "ZA" ? 70 : 20;
  return { memberAgeDays, recentBankChangeDays, deviceRiskScore };
};

const applyPolicyChecks = async (state: FraudStateType) => {
  const flags: string[] = [];
  if (state.event.type === "beneficiary_change") {
    flags.push("beneficiary_change_requires_review");
  }
  if (state.recentBankChangeDays !== null && state.recentBankChangeDays <= 7) {
    flags.push("recent_bank_change");
  }
  if ((state.event.amount ?? 0) > 250000) flags.push("high_value_transaction");
  return { policyFlags: flags };
};
```
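Policy flags mark cases for review, but some action combinations should never reach a model at all. A minimal hard-block sketch, with illustrative thresholds rather than recommendations, might sit alongside these checks:

```ts
// A minimal sketch of a deterministic hard-block that short-circuits the
// flow before any LLM call. Thresholds are illustrative only.
const hardBlock = (state: FraudStateType): boolean => {
  const { event, recentBankChangeDays, deviceRiskScore } = state;
  // Bank details changed within the last week, now a high-value withdrawal.
  if (
    event.type === "withdrawal" &&
    recentBankChangeDays !== null &&
    recentBankChangeDays <= 7 &&
    (event.amount ?? 0) > 100_000
  ) {
    return true;
  }
  // Bank-account edit coming from a device we already consider risky.
  return event.type === "bank_update" && deviceRiskScore >= 80;
};
```

You could route on this the same way `routeCase` branches in step 4, sending blocked events straight to the investigator queue.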
3) Add an LLM decision node with structured output
Run the model only after the deterministic rules have done their work. The model should explain and classify; it should not be the first line of defense.
```ts
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

// Validated output schema: malformed model responses are rejected here,
// before they can reach downstream systems.
const DecisionSchema = z.object({
  recommendation: z.enum(["approve", "hold", "escalate", "reject"]),
  rationale: z.string().describe("Short plain-language explanation"),
  riskScore: z.number().int().min(0).max(100),
});

const llmDecision = async (state: FraudStateType) => {
  const prompt = `
You are reviewing a pension fund fraud case.
Event type: ${state.event.type}
Amount: ${state.event.amount ?? "n/a"}
Member age days: ${state.memberAgeDays}
Recent bank change days: ${state.recentBankChangeDays ?? "n/a"}
Device risk score: ${state.deviceRiskScore}
Policy flags: ${state.policyFlags.join(", ") || "none"}
Classify the case and explain briefly why.
`;
  const decision = await llm.withStructuredOutput(DecisionSchema).invoke(prompt);
  return {
    recommendation: decision.recommendation,
    rationale: decision.rationale,
    riskScore: decision.riskScore,
  };
};
```
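A note on the design choice here: `withStructuredOutput` binds the zod schema to the model call and validates the response, so the node either returns well-typed fields or throws. That one line is what keeps the "skipping structured validation" pitfall (see Common Pitfalls below) out of this workflow.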
4) Wire routing logic and compile the graph
This is where LangGraph earns its keep. You can branch on policy before the model runs, then send only ambiguous cases to LLM reasoning.
```ts
const routeCase = (state: FraudStateType) => {
  // Flagged cases always get LLM review.
  if (state.policyFlags.includes("high_value_transaction")) return "llmDecision";
  if (state.policyFlags.includes("recent_bank_change")) return "llmDecision";
  // Unflagged login anomalies end here; everything else falls through to the model.
  if (state.event.type === "login") return END;
  return "llmDecision";
};

const graph = new StateGraph(FraudState)
  .addNode("enrichMemberContext", enrichMemberContext)
  .addNode("applyPolicyChecks", applyPolicyChecks)
  .addNode("llmDecision", llmDecision)
  .addEdge(START, "enrichMemberContext")
  .addEdge("enrichMemberContext", "applyPolicyChecks")
  .addConditionalEdges("applyPolicyChecks", routeCase, {
    llmDecision: "llmDecision",
    [END]: END,
  })
  .addEdge("llmDecision", END);

export const fraudDetectionApp = graph.compile();
```
Then invoke it with a real event:
```ts
const result = await fraudDetectionApp.invoke({
  event: {
    eventId: "evt_123",
    memberId: "mem_456",
    type: "beneficiary_change",
    country: "ZA",
  },
});

console.log(result.recommendation);
console.log(result.rationale);
```
Production Considerations
- **Data residency.** Keep member PII in-region. If your pension fund operates across jurisdictions, route events to a region-specific deployment and avoid sending raw identifiers to external model providers unless your legal team has approved it.
- **Auditability.** Persist every input state, intermediate node output, final recommendation, and model prompt/response. For pensions this is not optional; investigators need to reconstruct why a payment was held six months later. (A checkpointer sketch follows this list.)
- **Guardrails.** Hard-block certain actions with deterministic rules before any LLM call, as in the `hardBlock` sketch in step 2. Examples include sudden beneficiary changes followed by high-value withdrawals or bank-account edits from new devices.
- **Monitoring.** Track false positive rate by transaction type, manual override rate by investigator team, and latency per node. Pension operations have batch windows and cutoff times; slow decisions can delay legitimate retiree payments.
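For the audit trail, LangGraph's checkpointer interface already persists state after every node. A minimal sketch, assuming the `graph` from step 4: `MemorySaver` is the in-memory checkpointer that ships with `@langchain/langgraph`, and in production you would swap it for a durable backend.

```ts
import { MemorySaver } from "@langchain/langgraph";

// Compile with a checkpointer so every state transition is persisted.
const auditableApp = graph.compile({ checkpointer: new MemorySaver() });

// One thread per event keeps each case's history independently queryable.
const config = { configurable: { thread_id: "evt_123" } };
await auditableApp.invoke(
  { event: { eventId: "evt_123", memberId: "mem_456", type: "bank_update" } },
  config
);

// Replay every checkpoint when investigators need to reconstruct the case.
for await (const snapshot of auditableApp.getStateHistory(config)) {
  console.log(snapshot.values);
}
```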
Common Pitfalls
- •
Letting the LLM make first-pass decisions
- •Don’t do this. Put deterministic policy checks ahead of the model so obvious violations never depend on prompt quality.
- •
Mixing PII into free-form prompts without controls
- •Strip unnecessary identifiers before calling the model. Use tokenized IDs in prompts and resolve them back inside your secure application layer.
- •
Skipping structured validation on model output
- •Never trust raw text. Parse into a schema and reject malformed outputs before they reach downstream systems or investigator queues.
- •
Treating all suspicious events the same
- •A login anomaly is not a beneficiary-change fraud case. Build separate branches for payment fraud, identity takeover, account mutation abuse, and withdrawal abuse so your thresholds match pension risk patterns.
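A minimal tokenization sketch for the PII pitfall. The in-memory `Map` and the sample identifier are for illustration only; in practice you would back this with a vault or keyed hashing.

```ts
// Hypothetical tokenizer: the model only ever sees opaque tokens, and real
// identifiers are resolved back inside the secure application layer.
const tokenToMemberId = new Map<string, string>();

const tokenize = (memberId: string): string => {
  const token = `member_${tokenToMemberId.size + 1}`;
  tokenToMemberId.set(token, memberId);
  return token;
};

const resolve = (token: string): string | undefined => tokenToMemberId.get(token);

// Usage: the prompt carries "member_1", never the real pension identifier.
const safeId = tokenize("ZA-PEN-8823771");
const prompt = `Summarize recent suspicious activity for ${safeId}.`;
const realId = resolve(safeId); // only inside your own systems
```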
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap? Grab the free AI Agent Starter Kit: architecture templates, compliance checklists, and a 7-email deep-dive course.