How to Build a Fraud Detection Agent Using LangChain in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
fraud-detection · langchain · typescript · wealth-management

A fraud detection agent for wealth management watches client activity, flags suspicious patterns, and explains why a case needs review. It matters because the cost of a missed alert is not just financial loss; it can trigger compliance breaches, client harm, and audit findings that are expensive to unwind.

Architecture

Build this agent as a small set of deterministic components around an LLM, not as a free-form chatbot.

  • Event ingestion layer

    • Pulls transaction events, account changes, beneficiary updates, login anomalies, and advisor actions.
    • Normalizes records into a consistent schema before they hit the agent.
  • Risk rules engine

    • Applies hard controls first: sanctions hits, unusual wire amounts, rapid beneficiary changes, failed MFA spikes.
    • Keeps obvious cases out of the model path.
  • LangChain classifier agent

    • Uses ChatOpenAI with structured output to classify risk and produce a rationale.
    • Returns JSON only so downstream systems can route cases reliably.
  • Evidence retrieval

    • Pulls prior alerts, KYC notes, account profile data, and recent activity.
    • Gives the model context without dumping raw internal systems into prompts.
  • Case management output

    • Writes the decision to a queue or ticketing system for human review.
    • Stores traceable evidence for audit and model governance.
  • Audit and observability layer

    • Logs prompt inputs, model outputs, rule hits, latency, and final disposition.
    • Supports compliance review and post-incident analysis.
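The evidence retrieval layer can stay deliberately small: assemble a bounded, pre-redacted context string instead of dumping raw system output into prompts. A minimal sketch, assuming a hypothetical EvidenceBundle shape (the field names are illustrative, not part of the implementation below):

```typescript
// Hypothetical evidence shape; adapt to your internal systems.
interface EvidenceBundle {
  priorAlerts: { date: string; reason: string }[];
  kycSummary: string;          // assumed to be redacted upstream
  recentEventTypes: string[];  // e.g. last 10 event types
}

// Build a bounded context block for the prompt. Capping each section
// keeps token usage predictable and avoids leaking full histories.
export function buildEvidenceContext(evidence: EvidenceBundle, maxAlerts = 5): string {
  const shown = Math.min(maxAlerts, evidence.priorAlerts.length);
  const alerts = evidence.priorAlerts
    .slice(0, maxAlerts)
    .map((a) => `- ${a.date}: ${a.reason}`)
    .join("\n");

  return [
    `Prior alerts (${evidence.priorAlerts.length} total, showing ${shown}):`,
    alerts || "- none",
    `KYC summary: ${evidence.kycSummary}`,
    `Recent activity: ${evidence.recentEventTypes.join(", ") || "none"}`,
  ].join("\n");
}
```

The resulting string slots into the prompt template as one variable, which keeps the prompt auditable as a single versioned artifact.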

Implementation

1) Define the event schema and the risk output

Keep the input narrow. Wealth management teams need explainability and auditability more than clever prompts.

import { z } from "zod";

export const FraudEventSchema = z.object({
  clientId: z.string(),
  accountId: z.string(),
  eventType: z.enum([
    "wire_transfer",
    "beneficiary_change",
    "login_anomaly",
    "address_change",
    "advisor_override"
  ]),
  amountUsd: z.number().nonnegative().optional(),
  country: z.string().optional(),
  deviceRiskScore: z.number().min(0).max(100).optional(),
  priorAlerts30d: z.number().int().nonnegative(),
  kycRiskTier: z.enum(["low", "medium", "high"]),
  timestamp: z.string()
});

export const FraudDecisionSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  shouldEscalate: z.boolean(),
  reasons: z.array(z.string()).min(2),
  recommendedAction: z.enum([
    "allow",
    "step_up_auth",
    "hold_for_review",
    "freeze_and_escalate"
  ])
});

export type FraudEvent = z.infer<typeof FraudEventSchema>;
export type FraudDecision = z.infer<typeof FraudDecisionSchema>;

2) Build the LangChain chain with structured output

Use ChatOpenAI plus withStructuredOutput. This keeps the model on rails and makes parsing reliable in production.

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { FraudEventSchema, FraudDecisionSchema } from "./schemas";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = PromptTemplate.fromTemplate(`
You are a fraud detection analyst for a wealth management firm.

Assess whether this event indicates fraud or account takeover.
Return only structured output matching the schema.

Client event:
{eventJson}

Rules:
- Prioritize account takeover signals on login anomalies + beneficiary changes.
- Treat large wires to high-risk jurisdictions as higher risk.
- Be conservative when KYC risk tier is high.
- If evidence is weak, recommend human review instead of escalation.
`);

const fraudAgent = llm.withStructuredOutput(FraudDecisionSchema);

export async function assessFraud(rawEvent: unknown) {
  const event = FraudEventSchema.parse(rawEvent);

  const input = await prompt.format({
    eventJson: JSON.stringify(event),
  });

  const decision = await fraudAgent.invoke(input);
  return decision;
}

3) Add deterministic pre-checks before the model

Do not send every event to the LLM. Use rules to short-circuit obvious cases and reduce cost.

// Continuing in fraud-agent.ts, alongside assessFraud.
import type { FraudEvent } from "./schemas";

// Deterministic pre-checks. These run before any LLM call.
export function applyHardRules(event: FraudEvent) {
  if (event.eventType === "beneficiary_change" && event.priorAlerts30d > 2) {
    return {
      riskLevel: "high" as const,
      shouldEscalate: true,
      reasons: ["Repeated alerts within 30 days", "Beneficiary change is high-risk"],
      recommendedAction: "hold_for_review" as const,
    };
  }

  if (
    event.eventType === "wire_transfer" &&
    (event.amountUsd ?? 0) > 250000 &&
    event.kycRiskTier === "high"
  ) {
    return {
      riskLevel: "high" as const,
      shouldEscalate: true,
      reasons: ["Large wire above threshold", "High KYC risk tier"],
      recommendedAction: "freeze_and_escalate" as const,
    };
  }

  return null;
}

export async function detectFraud(rawEvent: unknown) {
  const event = FraudEventSchema.parse(rawEvent);
  const hardRuleDecision = applyHardRules(event);

  // Short-circuit on a rule hit; only ambiguous cases reach the model.
  if (hardRuleDecision) return hardRuleDecision;

  return assessFraud(event);
}
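Downstream routing from recommendedAction can then be a plain lookup. A sketch, with illustrative queue names (substitute your case management system's identifiers):

```typescript
type RecommendedAction = "allow" | "step_up_auth" | "hold_for_review" | "freeze_and_escalate";

// Map each action to a workflow queue. Names are illustrative only.
const actionRoutes: Record<RecommendedAction, string> = {
  allow: "auto_close",
  step_up_auth: "auth_challenge_queue",
  hold_for_review: "analyst_review_queue",
  freeze_and_escalate: "financial_crime_queue",
};

export function routeDecision(action: RecommendedAction): string {
  return actionRoutes[action];
}
```

Because the decision schema is a closed enum, the lookup is exhaustive at compile time: adding a new action to the schema forces you to decide where it routes.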

4) Wire it into an API handler with audit logging

Persist both inputs and outputs. In wealth management, you need a defensible trail for compliance and internal review.

import express from "express";
import { detectFraud } from "./fraud-agent";

const app = express();
app.use(express.json());

app.post("/fraud/check", async (req, res) => {
	const startedAt = Date.now();

	try {
		const decision = await detectFraud(req.body);

		console.log(JSON.stringify({
			eventType: req.body.eventType,
			clientId: req.body.clientId,
			riskLevel: decision.riskLevel,
			shouldEscalate: decision.shouldEscalate,
			latencyMs: Date.now() - startedAt
		}));

		res.json(decision);
	} catch (error) {
		console.error("fraud_check_failed", error);
		res.status(400).json({ error: "Invalid event or processing failure" });
	}
});

app.listen(3000);

Production Considerations

  • Data residency

    • Keep client PII and portfolio data in-region if your jurisdiction requires it.
    • If you use hosted LLMs, confirm where prompts are processed and whether logs are retained outside approved regions.
  • Compliance controls

    • Store prompt versions, rule versions, model version, and final disposition together.
    • That gives compliance teams a full chain of evidence during reviews or regulator requests.
  • Monitoring

    • Track false positives by advisor team, product line, jurisdiction, and client segment.
A fraud agent that over-flags ultra-high-net-worth (UHNW) clients will create alert fatigue fast.
  • Guardrails

    • Never let the model directly freeze accounts or block transfers without policy checks. Route only to human review or predefined workflow states unless a hard rule triggers action.
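One way to enforce that guardrail in code is a gate that only permits an automatic freeze when a hard rule, not the model, produced the decision. A sketch; the decisionSource field is an assumption you would thread through the pipeline wherever the decision is created:

```typescript
type Action = "allow" | "step_up_auth" | "hold_for_review" | "freeze_and_escalate";

interface GatedDecision {
  recommendedAction: Action;
  decisionSource: "hard_rule" | "llm"; // assumed metadata, set at decision time
}

// Downgrade model-originated freezes to human review; only deterministic
// rules may trigger an automatic freeze.
export function enforceActionPolicy(decision: GatedDecision): Action {
  if (decision.recommendedAction === "freeze_and_escalate" && decision.decisionSource === "llm") {
    return "hold_for_review";
  }
  return decision.recommendedAction;
}
```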

Common Pitfalls

  • Sending raw sensitive data into prompts

    Don’t pass full statements or unredacted notes unless you have explicit controls. Redact account numbers, tax IDs, addresses, and free-text advisor notes before they reach LangChain.
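A minimal redaction pass before any text reaches a prompt might look like this. The patterns are illustrative, not exhaustive; real redaction should follow your firm's data classification rules:

```typescript
// Strip common identifier patterns before text reaches the LLM.
// These regexes are illustrative placeholders, not a complete policy.
export function redactSensitive(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_TAX_ID]")      // US SSN format
    .replace(/\b\d{8,17}\b/g, "[REDACTED_ACCOUNT]")              // bare account numbers
    .replace(/\b[A-Z]{4}[A-Z0-9]{4,7}\b/g, "[REDACTED_SWIFT]");  // SWIFT/BIC-like codes
}
```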

  • Using the LLM as the first line of defense

    Hard rules should catch obvious fraud patterns before inference. If you rely on the model alone, you will get inconsistent decisions and higher operational cost.

  • Skipping audit metadata

    A decision without context is useless in wealth management. Log input hashes, prompt version, rule hits, model name, timestamp, and reviewer outcome so every alert can be reconstructed later.
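Hashing the raw input gives a tamper-evident reference without storing the sensitive payload in the log itself. A sketch using Node's built-in crypto module; the record fields are illustrative:

```typescript
import { createHash } from "node:crypto";

interface AuditRecord {
  inputHash: string;     // sha256 of the raw event, not the event itself
  promptVersion: string;
  ruleHits: string[];
  modelName: string;
  timestamp: string;
}

// Deterministic hash of the raw event so any alert can be matched
// back to its exact input during a compliance review.
export function buildAuditRecord(
  rawEvent: unknown,
  promptVersion: string,
  ruleHits: string[],
  modelName: string
): AuditRecord {
  const inputHash = createHash("sha256")
    .update(JSON.stringify(rawEvent))
    .digest("hex");

  return { inputHash, promptVersion, ruleHits, modelName, timestamp: new Date().toISOString() };
}
```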


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
