How to Build a Fraud Detection Agent Using LangChain in TypeScript for Banking

By Cyprian Aarons · Updated 2026-04-21
fraud-detection · langchain · typescript · banking

A fraud detection agent in banking is not a chatbot that guesses whether a transaction is bad. It is an orchestration layer that pulls transaction context, evaluates risk signals, calls deterministic checks, and produces an auditable decision or escalation path. That matters because banks need fast detection, explainable outcomes, and controls that survive compliance review.

Architecture

  • Transaction ingestion layer

    • Receives card payments, ACH transfers, wire requests, login events, or beneficiary changes.
    • Normalizes payloads into a single risk schema.
  • Risk signal retrieval

    • Pulls customer profile data, velocity rules, device fingerprinting, geolocation mismatch, and historical fraud labels.
    • Usually backed by internal APIs or feature stores.
  • LangChain agent

    • Coordinates tool calls and decides whether to approve, hold, step-up verify, or escalate.
    • Uses ChatOpenAI plus tools for deterministic checks.
  • Policy and compliance guardrail

    • Enforces bank-specific rules like KYC status, sanctions screening flags, residency constraints, and audit logging.
    • Prevents the model from making unsupported final decisions.
  • Decision store and audit trail

    • Persists every input signal, tool output, model response, and final action.
    • Required for investigations and regulator review.
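The ingestion layer's normalization step can be treated as a pure mapping from a raw payload into the risk schema. Here is a minimal sketch; the raw field names (`txn_ref`, `amount_minor_units`, and so on) are assumptions for illustration, not any card network's actual message format:

```ts
// Hypothetical raw card-payment payload; field names are assumptions.
interface RawCardPayment {
  txn_ref: string;
  cardholder_id: string;
  amount_minor_units: number; // e.g. cents
  currency_code: string;
  merchant_country_iso2: string;
  customer_country_iso2: string;
  ip_geo_country: string;
  device_known: boolean;
  kyc_state: "verified" | "pending" | "failed";
}

// Normalized risk event matching the FraudEvent shape used later in this guide.
interface RiskEvent {
  transactionId: string;
  customerId: string;
  amount: number;
  currency: string;
  merchantCountry: string;
  customerCountry: string;
  ipCountry: string;
  deviceTrusted: boolean;
  kycStatus: "verified" | "pending" | "failed";
}

export function normalizeCardPayment(raw: RawCardPayment): RiskEvent {
  return {
    transactionId: raw.txn_ref,
    customerId: raw.cardholder_id,
    amount: raw.amount_minor_units / 100, // minor units to major units
    currency: raw.currency_code.toUpperCase(),
    merchantCountry: raw.merchant_country_iso2.toUpperCase(),
    customerCountry: raw.customer_country_iso2.toUpperCase(),
    ipCountry: raw.ip_geo_country.toUpperCase(),
    deviceTrusted: raw.device_known,
    kycStatus: raw.kyc_state,
  };
}
```

One normalizer per payment rail (card, ACH, wire) keeps the rest of the pipeline working against a single shape.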

Implementation

1) Install dependencies and define the risk schema

Use LangChain JS packages directly. Keep the agent input strict so you do not pass raw customer blobs into the model.

```bash
npm install langchain @langchain/openai @langchain/core zod
```

```ts
import { z } from "zod";

export const FraudEventSchema = z.object({
  transactionId: z.string(),
  customerId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  merchantCountry: z.string().length(2),
  customerCountry: z.string().length(2),
  ipCountry: z.string().length(2),
  deviceTrusted: z.boolean(),
  kycStatus: z.enum(["verified", "pending", "failed"]),
});

export type FraudEvent = z.infer<typeof FraudEventSchema>;
```

This schema is doing real work. It keeps the agent focused on fields that matter for fraud scoring and avoids leaking unnecessary personal data into prompts.
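The same gate belongs at the service boundary: reject malformed events before anything reaches the agent. In practice you would call `FraudEventSchema.safeParse`; the dependency-free sketch below hand-rolls the same check so the pattern is visible on its own:

```ts
type KycStatus = "verified" | "pending" | "failed";

interface FraudEvent {
  transactionId: string;
  customerId: string;
  amount: number;
  currency: string;
  deviceTrusted: boolean;
  kycStatus: KycStatus;
}

// Hand-rolled stand-in for FraudEventSchema.safeParse: returns the typed
// event on success, or a list of field errors on failure.
export function acceptEvent(
  raw: Record<string, unknown>
): { ok: true; event: FraudEvent } | { ok: false; errors: string[] } {
  const errors: string[] = [];
  if (typeof raw.transactionId !== "string") errors.push("transactionId must be a string");
  if (typeof raw.customerId !== "string") errors.push("customerId must be a string");
  if (typeof raw.amount !== "number" || raw.amount <= 0) errors.push("amount must be a positive number");
  if (typeof raw.currency !== "string" || raw.currency.length !== 3) errors.push("currency must be a 3-letter code");
  if (typeof raw.deviceTrusted !== "boolean") errors.push("deviceTrusted must be a boolean");
  if (!["verified", "pending", "failed"].includes(raw.kycStatus as string)) {
    errors.push("kycStatus must be verified, pending, or failed");
  }
  if (errors.length > 0) return { ok: false, errors };
  return { ok: true, event: raw as unknown as FraudEvent };
}
```

Rejecting at the boundary means every event the agent sees is already well-formed, which simplifies both prompting and audit.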

2) Create deterministic tools for bank controls

Do not ask the model to infer sanctions hits or velocity limits from prose. Wrap your internal systems as tools using LangChain’s DynamicStructuredTool.

```ts
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const velocityCheckTool = new DynamicStructuredTool({
  name: "velocity_check",
  description: "Checks recent transaction velocity for this customer.",
  schema: z.object({
    customerId: z.string(),
    amount: z.number(),
  }),
  func: async ({ customerId }) => {
    // Replace with a real API call or feature store lookup
    const recentCount = customerId === "cust-high-risk" ? 12 : 1;
    return JSON.stringify({ recentCount, thresholdBreached: recentCount > 5 });
  },
});

const sanctionsCheckTool = new DynamicStructuredTool({
  name: "sanctions_check",
  description: "Checks whether the transaction touches a sanctions or watchlist signal.",
  schema: z.object({
    merchantCountry: z.string(),
    customerCountry: z.string(),
    ipCountry: z.string(),
  }),
  func: async ({ merchantCountry }) => {
    // Replace with your sanctions screening service; this list is a stub
    const flaggedCountries = ["IR", "KP", "SY"];
    return JSON.stringify({ flagged: flaggedCountries.includes(merchantCountry) });
  },
});
```

These tools are deterministic. That gives you repeatability and a clean audit trail when compliance asks why a transfer was held.
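Because the checks are deterministic functions of their inputs, every call can be captured as an audit record. A minimal sketch of that pattern, separating the pure check from a generic audit wrapper (the record fields are illustrative, not a regulatory standard):

```ts
interface ToolAuditRecord {
  tool: string;
  input: unknown;
  output: unknown;
  calledAt: string; // ISO timestamp
}

// Deterministic core of the velocity check, split out from the LangChain
// tool wrapper so it can be unit-tested and audited directly.
export function velocityCheck(recentCount: number, threshold = 5) {
  return { recentCount, thresholdBreached: recentCount > threshold };
}

// Runs any check and appends an audit record to the provided sink.
export function auditToolCall<I, O>(
  tool: string,
  input: I,
  run: (input: I) => O,
  sink: ToolAuditRecord[]
): O {
  const output = run(input);
  sink.push({ tool, input, output, calledAt: new Date().toISOString() });
  return output;
}
```

The same wrapper works for `sanctions_check`; in production the sink would be an append-only store rather than an in-memory array.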

3) Build the agent with LangChain and force structured output

Use ChatOpenAI with an explicit output schema so the agent returns a controlled decision instead of free-form text.

```ts
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { FraudEventSchema } from "./schema";

const DecisionSchema = z.object({
  action: z.enum(["approve", "hold", "step_up_auth", "escalate"]),
  reason: z.string(),
});

const parser = StructuredOutputParser.fromZodSchema(DecisionSchema);

// Keep the format instructions out of the template string itself: they
// contain literal braces, which ChatPromptTemplate would otherwise treat
// as template variables.
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    [
      "You are a banking fraud triage agent.",
      "Use tools for velocity and sanctions checks.",
      "Do not approve transactions that violate policy.",
      "Return only structured output.",
      "{format_instructions}",
    ].join("\n"),
  ],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const tools = [velocityCheckTool, sanctionsCheckTool];

export async function buildFraudAgent() {
  const promptWithInstructions = await prompt.partial({
    format_instructions: parser.getFormatInstructions(),
  });

  const agent = await createOpenAIToolsAgent({
    llm,
    tools,
    prompt: promptWithInstructions,
  });

  return new AgentExecutor({
    agent,
    tools,
    verbose: false,
  });
}
```
The important pattern here is not “let the model decide.” The important pattern is “let the model orchestrate policy-aware checks and then emit a constrained decision.”

4) Step through the evaluation flow

```ts
export async function evaluateFraud(rawEvent: unknown) {
  const event = FraudEventSchema.parse(rawEvent);
  const executor = await buildFraudAgent();

  const input = `
Transaction:
${JSON.stringify(event)}

Policy:
- If sanctions_check flags true => escalate.
- If velocity_check shows thresholdBreached => hold.
- If deviceTrusted is false and amount > 5000 => step_up_auth.
- Otherwise approve if KYC is verified.
`;

  const result = await executor.invoke({ input });
  // parser.parse strips any markdown fences and validates against DecisionSchema.
  const parsed = await parser.parse(result.output);

  return {
    transactionId: event.transactionId,
    decision: parsed.action,
    reason: parsed.reason,
  };
}
```

This gives you a production-shaped interface. The caller gets one decision object that can be written to an audit log, pushed to case management, or used to trigger step-up authentication.

Production Considerations

  • Deploy in-region

Keep customer data and prompts inside the bank’s approved region. For multi-country operations, route requests by residency policy so you do not move PII across borders illegally.
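One way to enforce this is a residency router in front of the model client: resolve the customer's residency to an approved endpoint, and refuse to route rather than silently crossing a border. A sketch, where the region names and endpoint URLs are placeholder assumptions:

```ts
// Hypothetical mapping from approved region to an in-region model endpoint.
const REGION_ENDPOINTS: Record<string, string> = {
  EU: "https://eu.llm.internal.example",
  UK: "https://uk.llm.internal.example",
  US: "https://us.llm.internal.example",
};

// Hypothetical residency policy: customer country -> approved region.
const RESIDENCY_TO_REGION: Record<string, string> = {
  DE: "EU",
  FR: "EU",
  IE: "EU",
  GB: "UK",
  US: "US",
};

// Fail closed: an unmapped residency is an error, not a default region.
export function resolveEndpoint(customerCountry: string): string {
  const region = RESIDENCY_TO_REGION[customerCountry];
  const endpoint = region ? REGION_ENDPOINTS[region] : undefined;
  if (!endpoint) {
    throw new Error(`No approved region for residency ${customerCountry}`);
  }
  return endpoint;
}
```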

  • Log everything needed for audit

Store input features, tool outputs, model version, prompt version, final decision, and timestamps. In banking, “the model said so” is not acceptable evidence.
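A minimal shape for that decision-level record might look like the sketch below; the field names and version strings are illustrative, not a compliance standard:

```ts
interface DecisionAuditRecord {
  transactionId: string;
  inputFeatures: Record<string, unknown>;
  toolOutputs: Record<string, unknown>;
  modelVersion: string;
  promptVersion: string;
  decision: "approve" | "hold" | "step_up_auth" | "escalate";
  reason: string;
  decidedAt: string; // ISO timestamp
}

// Assembles one immutable record per decision; versions are pinned so the
// exact model and prompt behind any historical decision can be identified.
export function buildAuditRecord(
  transactionId: string,
  inputFeatures: Record<string, unknown>,
  toolOutputs: Record<string, unknown>,
  decision: DecisionAuditRecord["decision"],
  reason: string,
  versions = { modelVersion: "gpt-4o-mini", promptVersion: "fraud-triage-v1" }
): DecisionAuditRecord {
  return {
    transactionId,
    inputFeatures,
    toolOutputs,
    ...versions,
    decision,
    reason,
    decidedAt: new Date().toISOString(),
  };
}
```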

  • Add hard guardrails outside the model

Sanctions hits, KYC failures, amount thresholds, and AML rules should short-circuit before LLM reasoning. Use the agent for orchestration and explanation, not as the source of truth.
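Those short-circuits can run as a plain function before the agent is ever invoked. A minimal sketch under the policy used in this guide (the daily-limit rule is an illustrative assumption):

```ts
interface GuardrailInput {
  kycStatus: "verified" | "pending" | "failed";
  sanctionsFlagged: boolean;
  amount: number;
  dailyLimit: number;
}

// Returns a forced action when a non-negotiable rule fires, or null to let
// the agent proceed with orchestration and explanation.
export function hardGuardrails(input: GuardrailInput): "escalate" | "hold" | null {
  if (input.sanctionsFlagged) return "escalate"; // sanctions hit never reaches the LLM
  if (input.kycStatus === "failed") return "hold";
  if (input.amount > input.dailyLimit) return "hold";
  return null;
}
```

The caller runs `hardGuardrails` first and only invokes the agent when it returns `null`, so the model can never out-vote a sanctions or KYC rule.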

  • Monitor drift and false positives

Track approval rate, manual review rate, chargeback rate, step-up auth conversion, and segment-level false positives. Fraud patterns change quickly; your thresholds will age out faster than your release cycle.

Common Pitfalls

  • Letting the LLM make final compliance decisions

Avoid this by encoding non-negotiable rules in code or policy services. The agent should recommend or orchestrate; it should not override sanctions logic.

  • Passing raw sensitive data into prompts

Do not send full account numbers, full names when unnecessary, or free-form notes with PII. Redact early and pass only features required for triage.
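A sketch of redacting a raw record down to triage features before it can reach a prompt; the raw field names are assumptions for illustration:

```ts
// Hypothetical raw customer record as it arrives from an internal system.
interface RawCustomerRecord {
  fullName: string;
  accountNumber: string;
  notes: string; // free-form, may contain PII
  homeCountry: string;
  deviceTrusted: boolean;
}

// Keep only what triage needs: mask the account number to its last four
// digits and drop the name and free-form notes entirely.
export function redactForTriage(raw: RawCustomerRecord) {
  return {
    accountLast4: raw.accountNumber.slice(-4),
    customerCountry: raw.homeCountry,
    deviceTrusted: raw.deviceTrusted,
  };
}
```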

  • Skipping structured outputs

If you accept plain text responses like “looks suspicious,” your downstream systems will break under ambiguity. Use StructuredOutputParser or Zod-backed parsing so every response has a predictable shape.
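A sketch of a fail-closed parse step: anything that does not match the decision shape becomes an escalation for human review instead of a guess:

```ts
type Action = "approve" | "hold" | "step_up_auth" | "escalate";

interface Decision {
  action: Action;
  reason: string;
}

const ACTIONS: Action[] = ["approve", "hold", "step_up_auth", "escalate"];

// Fail closed: unparseable or malformed model output is never treated as an
// approval; it is routed to a human.
export function parseDecision(output: string): Decision {
  try {
    const candidate = JSON.parse(output);
    if (
      candidate &&
      ACTIONS.includes(candidate.action) &&
      typeof candidate.reason === "string"
    ) {
      return { action: candidate.action, reason: candidate.reason };
    }
  } catch {
    // fall through to escalation
  }
  return { action: "escalate", reason: "Unparseable model output" };
}
```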

A fraud detection agent built this way fits banking reality. It is explainable enough for auditors, strict enough for compliance teams, and flexible enough to adapt as fraud patterns shift.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
