How to Build a Fraud Detection Agent Using LangGraph in TypeScript for Wealth Management
A fraud detection agent for wealth management watches transaction flow, account behavior, and advisor activity to spot suspicious patterns before money moves or client trust is damaged. In this domain, the bar is higher than generic fraud detection: every decision needs auditability, low false positives, and controls that respect compliance, data residency, and client confidentiality.
Architecture
- **State model**
  - Holds the case payload, risk signals, investigation notes, and final disposition.
  - Keep it typed so every node in the graph works against the same contract.
- **Signal ingestion node**
  - Pulls transaction metadata, account history, device context, advisor interactions, and watchlist hits.
  - Normalizes inputs into a single case object.
- **Rules and scoring node**
  - Applies deterministic checks first: velocity spikes, beneficiary changes, unusual geographies, large withdrawals after profile changes.
  - Produces a structured risk score and reasons.
- **LLM investigation node**
  - Summarizes evidence for an analyst or generates a case narrative.
  - Use it for explanation and triage, not for final fraud decisions.
- **Escalation / routing node**
  - Sends high-risk cases to human review.
  - Routes low-risk cases to auto-close or monitoring.
- **Audit sink**
  - Persists every node output, score change, and decision reason.
  - Required for model governance and regulatory review.
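The audit sink can start as an append-only JSONL log. A minimal sketch, assuming a hypothetical `AuditEvent` shape and a local file path; a production system would write to an immutable store (e.g. WORM object storage) rather than local disk:

```typescript
import { appendFileSync, readFileSync } from "node:fs";

// Hypothetical shape for one audit record; adjust to your case schema.
type AuditEvent = {
  caseId: string;
  node: string;
  timestamp: string;
  detail: string;
};

// Append-only JSONL sink: one line per event, never rewritten in place.
function writeAuditEvent(path: string, event: AuditEvent): void {
  appendFileSync(path, JSON.stringify(event) + "\n", "utf8");
}

// Read the full trail back for one case, e.g. during a regulatory review.
function readAuditTrail(path: string, caseId: string): AuditEvent[] {
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as AuditEvent)
    .filter((event) => event.caseId === caseId);
}
```

Because entries are only ever appended, the order of the returned trail is the order in which decisions were made, which is exactly what reviewers need to reconstruct.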
Implementation
1) Define the graph state and helper functions
Use `Annotation.Root` to define typed state. In wealth management, keep PII minimal in graph state; store references when possible and resolve sensitive data only inside controlled nodes.
```ts
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

type FraudSignal = {
  type: string;
  severity: "low" | "medium" | "high";
  reason: string;
};

const FraudState = Annotation.Root({
  caseId: Annotation<string>(),
  customerId: Annotation<string>(),
  transactionAmount: Annotation<number>(),
  currency: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  signals: Annotation<FraudSignal[]>({
    default: () => [],
    reducer: (left, right) => [...left, ...right],
  }),
  riskScore: Annotation<number>({
    default: () => 0,
    reducer: (_, right) => right,
  }),
  decision: Annotation<"review" | "hold" | "close">({
    default: () => "close",
    reducer: (_, right) => right,
  }),
  auditTrail: Annotation<string[]>({
    default: () => [],
    reducer: (left, right) => [...left, ...right],
  }),
});

function addSignal(type: string, severity: FraudSignal["severity"], reason: string): FraudSignal {
  return { type, severity, reason };
}
```
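The reducers above decide how each node's return value merges into channel state. To see why nodes should return only *newly added* signals, here is the concat reducer's behavior in isolation (plain TypeScript, no LangGraph dependency):

```typescript
type Signal = { type: string; reason: string };

// Mirrors the `signals` channel config: concat reducer, empty-array default.
const appendReducer = (left: Signal[], right: Signal[]): Signal[] => [...left, ...right];

// Simulate two node returns merging into the channel, as LangGraph would.
let channel: Signal[] = []; // default: () => []
channel = appendReducer(channel, [{ type: "large_transfer", reason: "over threshold" }]);
channel = appendReducer(channel, [{ type: "cross_border", reason: "non-US origin" }]);
// channel now holds both signals in arrival order; if a node had returned the
// full existing array plus its additions, every prior signal would duplicate.
```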
2) Build deterministic detection nodes first
For wealth management fraud, rules catch more than they miss when they’re tuned around known abuse patterns like account takeover, unauthorized wire transfers, and suspicious advisor behavior.
```ts
const ingestAndScore = async (state: typeof FraudState.State) => {
  // Return only the *new* signals; the concat reducer appends them to state,
  // so re-returning the existing array would duplicate every prior signal.
  const signals: FraudSignal[] = [];
  let score = state.riskScore;
  if (state.transactionAmount >= 100000) {
    signals.push(addSignal("large_transfer", "medium", "Transfer exceeds high-value threshold"));
    score += 25;
  }
  if (state.jurisdiction !== "US") {
    signals.push(addSignal("cross_border", "medium", "Transaction originates outside primary jurisdiction"));
    score += 15;
  }
  if (state.transactionAmount >= 500000) {
    signals.push(addSignal("extreme_value", "high", "High-value movement requires manual review"));
    score += 35;
  }
  return {
    signals,
    riskScore: Math.min(score, 100),
    auditTrail: [`ingestAndScore:${state.caseId}`],
    decision:
      score >= 60 ? ("review" as const) :
      score >= 35 ? ("hold" as const) :
      ("close" as const),
  };
};
```
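The thresholds above are easy to unit-test outside the graph. A standalone restatement of the same policy (a sketch mirroring `ingestAndScore`, not the node itself):

```typescript
// Standalone restatement of the scoring thresholds, handy for testing the
// policy without spinning up the graph. Values mirror ingestAndScore.
function scoreTransfer(amount: number, jurisdiction: string) {
  let score = 0;
  const reasons: string[] = [];
  if (amount >= 100_000) { score += 25; reasons.push("large_transfer"); }
  if (jurisdiction !== "US") { score += 15; reasons.push("cross_border"); }
  if (amount >= 500_000) { score += 35; reasons.push("extreme_value"); }
  const decision = score >= 60 ? "review" : score >= 35 ? "hold" : "close";
  return { riskScore: Math.min(score, 100), reasons, decision };
}

// A 600k cross-border wire trips all three rules: 25 + 15 + 35 = 75, so "review".
```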
3) Add an investigation node for analyst-ready summaries
If you use an LLM here, keep the prompt constrained. The output should explain why the case was flagged and what evidence matters; it should not invent facts or override policy.
```ts
const summarizeCase = async (state: typeof FraudState.State) => {
  // Returning `signals` here would duplicate them via the concat reducer,
  // so this node contributes only an audit entry.
  return {
    auditTrail: [`summarizeCase:${state.caseId}`],
  };
};
```
If you want a real LLM step later with LangGraph/LangChain models, plug it into this node and return only structured text plus citations to internal evidence IDs. That keeps the graph deterministic at the edges where compliance cares most.
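Until a model is wired in, the same contract can be exercised with a deterministic template that assembles the narrative only from recorded evidence. The `buildCaseNarrative` helper below is illustrative, not part of LangGraph; a model-backed version should be held to the same rule of citing evidence IDs and inventing nothing:

```typescript
type CaseSignal = { type: string; severity: "low" | "medium" | "high"; reason: string };

// Deterministic narrative built only from recorded evidence, with stable
// evidence IDs ([E1], [E2], ...) that an analyst or a later LLM can cite.
function buildCaseNarrative(caseId: string, riskScore: number, signals: CaseSignal[]): string {
  const lines = signals.map(
    (s, i) => `  [E${i + 1}] ${s.type} (${s.severity}): ${s.reason}`
  );
  return [
    `Case ${caseId} flagged with risk score ${riskScore}/100.`,
    `Evidence:`,
    ...lines,
  ].join("\n");
}
```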
4) Route by risk using `addConditionalEdges` and compile the graph
This is where LangGraph earns its keep. You separate scoring from routing so policy can change without rewriting your detection logic.
```ts
const workflow = new StateGraph(FraudState)
  .addNode("ingestAndScore", ingestAndScore)
  .addNode("summarizeCase", summarizeCase)
  .addEdge(START, "ingestAndScore")
  .addConditionalEdges("ingestAndScore", (state) => {
    // review and hold both get an analyst-ready summary; close ends the run
    if (state.decision !== "close") return "summarizeCase";
    return END;
  })
  .addEdge("summarizeCase", END);

export const fraudAgent = workflow.compile();
```
A better production version is to route on a dedicated function instead of overloading decision logic inside the condition. The point is the same: use the graph to express policy flow explicitly.
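One way to sketch that dedicated function; the `routeByDecision` name is an assumption, and `"__end__"` stands in for LangGraph's `END` sentinel so the sketch runs without the library:

```typescript
type Disposition = "review" | "hold" | "close";

// Policy lives in one named, testable function, so a routing change is a
// one-line diff and the conditional edge stays a thin adapter around it.
function routeByDecision(decision: Disposition): "summarizeCase" | "__end__" {
  // review and hold both warrant an analyst-ready summary; close ends the run
  return decision === "close" ? "__end__" : "summarizeCase";
}

// In the graph:
// .addConditionalEdges("ingestAndScore", (s) => routeByDecision(s.decision))
```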
Production Considerations
- **Auditability**
  - Persist every state transition with timestamps and immutable case IDs.
  - Wealth managers need traceability for internal compliance teams and external regulators.
- **Data residency**
  - Keep client data in-region.
  - If you run multi-region deployments for latency or resilience, ensure sensitive fields never cross jurisdictions without approved controls.
- **Human-in-the-loop controls**
  - Escalate all high-value or cross-border cases to an analyst queue.
  - Require manual approval before holds or freezes are applied.
  - Log reviewer identity and reason codes for every override.
- **Monitoring**
  - Track false positive rate by product line and client segment.
  - Monitor drift on thresholds like transfer size distribution and beneficiary-change frequency.
  - Alert on missing evidence sources; silent failures are dangerous in regulated workflows.
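As a sketch of the drift point above, a median-shift check over transfer sizes might look like this; the `DRIFT_RATIO` threshold is illustrative and would be tuned per product line:

```typescript
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Illustrative threshold: flag when the recent median moves more than 50%
// away from the baseline median.
const DRIFT_RATIO = 0.5;

function transferSizeDrifted(baseline: number[], recent: number[]): boolean {
  const base = median(baseline);
  const now = median(recent);
  return Math.abs(now - base) / base > DRIFT_RATIO;
}
```

When this fires, the right response is usually re-tuning the fixed rule thresholds (like the 100k and 500k cutoffs earlier), not silently letting the score distribution shift under them.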
Common Pitfalls
- **Using the LLM as the primary fraud detector**
  - Don't let natural language generation decide whether a transfer is fraudulent.
  - Use deterministic rules and scoring for the decision path; use the model for explanation and summarization only.
- **Storing raw sensitive data in graph state**
  - Avoid putting full account numbers, unmasked PII, or full trade details in every node payload.
  - Store references or redacted values in state and fetch sensitive records only inside controlled services.
- **Skipping explainability fields**
  - If your output is just `riskScore`, you'll fail operationally even if detection works.
  - Always emit `signals`, reason codes, `auditTrail`, and a final `decision` so compliance can reconstruct why a case was flagged.
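To make the reference-over-raw-PII pitfall concrete, here is a minimal sketch; the in-memory `vault` map stands in for a real tokenization or vault service, and the names are assumptions:

```typescript
// Graph state carries only an opaque token, never the raw account number.
type CaseRef = { caseId: string; accountRef: string };

// Stand-in for a controlled service; production resolution would be an
// authorized, audited API call, not a local map.
const vault = new Map<string, string>([["acct-ref-001", "9876543210"]]);

function maskAccount(accountNumber: string): string {
  // Keep the last four digits, mask the rest.
  return accountNumber.slice(-4).padStart(accountNumber.length, "*");
}

// Only this function ever touches the raw value, and it returns a masked form,
// so nothing downstream of it can leak the full account number.
function resolveMaskedAccount(ref: CaseRef): string {
  const raw = vault.get(ref.accountRef);
  if (!raw) throw new Error(`unknown account reference: ${ref.accountRef}`);
  return maskAccount(raw);
}
```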
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.