How to Build a Transaction Monitoring Agent Using LangChain in TypeScript for Banking
A transaction monitoring agent watches payment events, scores them against policy and behavioral rules, and flags cases that need human review. In banking, that matters because the cost of missing suspicious activity is regulatory exposure, while the cost of over-flagging is analyst fatigue and bad customer experience.
Architecture
- Event ingestion layer
  - Pulls transactions from Kafka, Kinesis, a database queue, or an HTTP webhook.
  - Normalizes raw payment records into a stable schema before any LLM call.
- Policy/rules engine
  - Handles deterministic checks first: velocity limits, geography mismatches, high-risk merchant categories, sanctions hits.
  - Keeps obvious decisions out of the model path.
- LangChain agent
  - Uses ChatOpenAI plus tool calling to reason over enriched transaction context.
  - Produces a structured risk assessment instead of free-form text.
- Case management store
  - Persists alerts, explanations, evidence, and model outputs for audit.
  - Supports analyst review and regulator requests.
- Observability and audit trail
  - Logs prompts, tool calls, final decisions, latency, and model version.
  - Required for traceability in banking environments.
- Human-in-the-loop review queue
  - Routes high-risk or ambiguous cases to compliance analysts.
  - Prevents fully automated adverse action without oversight.
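The layers above can be sketched as a typed pipeline. This is a minimal sketch of the control flow only; every name in it (`NormalizedTx`, `runRules`, `monitor`) is illustrative and not a LangChain API:

```typescript
// Illustrative stage types for the architecture above; none of these
// names come from LangChain.
interface NormalizedTx {
  transactionId: string;
  accountId: string;
  amount: number;
  country: string;
}

type Decision = "approve" | "review" | "escalate";

// Policy/rules engine: deterministic checks run before any model call.
function runRules(tx: NormalizedTx): { hardBlock: boolean; reasons: string[] } {
  const reasons: string[] = [];
  if (tx.amount > 10_000) reasons.push("amount over policy threshold"); // example rule
  return { hardBlock: reasons.length > 0, reasons };
}

// Orchestrator: the agent only sees cases the rules engine left undecided.
function monitor(tx: NormalizedTx, agent: (tx: NormalizedTx) => Decision): Decision {
  const rules = runRules(tx);
  if (rules.hardBlock) return "escalate"; // never reaches the LLM
  return agent(tx);
}
```

A hard rule hit short-circuits to `escalate` without spending a model call; everything else flows through to the agent.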
Implementation
1) Define the transaction schema and risk output
Keep the input strict. Banking workflows fail when you let loosely typed JSON drift across services.
```typescript
import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  accountId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  country: z.string().length(2),
  merchantCategory: z.string(),
  timestamp: z.string(),
  channel: z.enum(["card", "wire", "ach", "mobile"]),
  customerTenureDays: z.number().int().nonnegative(),
});

export type Transaction = z.infer<typeof TransactionSchema>;

export const RiskDecisionSchema = z.object({
  riskScore: z.number().min(0).max(100),
  decision: z.enum(["approve", "review", "escalate"]),
  reasons: z.array(z.string()).min(1),
});
```
2) Build deterministic tools before the model
Do not ask the LLM to rediscover basic compliance logic. Put those checks in tools so they are auditable and repeatable.
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const checkVelocityTool = tool(
  async ({ accountId }: { accountId: string }) => {
    // Replace with a real datastore query
    return {
      lastHourCount: 12,
      lastHourAmount: 18450,
      thresholdBreached: true,
    };
  },
  {
    name: "check_velocity",
    description: "Check recent transaction velocity for an account",
    schema: z.object({
      accountId: z.string(),
    }),
  }
);

export const checkSanctionsTool = tool(
  async ({ country }: { country: string }) => {
    // Replace with a sanctions / watchlist service
    return {
      hit: country === "IR" || country === "KP",
      listName: country === "IR" ? "OFAC" : null,
    };
  },
  {
    name: "check_sanctions",
    description: "Check whether a country is on a restricted list",
    schema: z.object({
      country: z.string(),
    }),
  }
);
```
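Because these checks are deterministic, you can short-circuit before the model ever runs. A sketch, assuming the stub result shapes above (the `preScreen` name and `PreScreenResult` type are mine, not LangChain's):

```typescript
// Deterministic pre-screen that runs before any LLM call. The result
// shapes mirror the stub tools above; the names are assumptions for
// this sketch, not LangChain APIs.
interface VelocityResult {
  lastHourCount: number;
  lastHourAmount: number;
  thresholdBreached: boolean;
}

interface SanctionsResult {
  hit: boolean;
  listName: string | null;
}

type PreScreenResult =
  | { decided: true; decision: "escalate" | "review"; reasons: string[] }
  | { decided: false };

function preScreen(velocity: VelocityResult, sanctions: SanctionsResult): PreScreenResult {
  if (sanctions.hit) {
    // Sanctions hits are never a judgment call: escalate without a model run.
    return {
      decided: true,
      decision: "escalate",
      reasons: [`sanctions hit: ${sanctions.listName ?? "unknown list"}`],
    };
  }
  if (velocity.thresholdBreached) {
    return { decided: true, decision: "review", reasons: ["velocity threshold breached"] };
  }
  return { decided: false }; // only undecided cases go to the agent
}
```

The discriminated union forces callers to check `decided` before reading a decision, which keeps the "rules first, model second" ordering visible in the types.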
3) Create the LangChain chain with structured output
This pattern keeps the model inside a bounded contract: run the deterministic tools, feed their results into the prompt, then force structured output from ChatOpenAI with a Zod schema.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { RunnableLambda } from "@langchain/core/runnables";
import { TransactionSchema, RiskDecisionSchema, type Transaction } from "./schemas";
import { checkVelocityTool, checkSanctionsTool } from "./tools";

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

const agentPrompt = `You are a transaction monitoring analyst for a bank.
Use tools when needed. Return only structured risk decisions.
Consider compliance rules, suspicious behavior patterns, and false positive reduction.`;

export async function assessTransaction(rawInput: unknown) {
  const tx = TransactionSchema.parse(rawInput);

  const runnable = RunnableLambda.from(async (input: Transaction) => {
    const velocity = await checkVelocityTool.invoke({ accountId: input.accountId });
    const sanctions = await checkSanctionsTool.invoke({ country: input.country });

    const prompt = `${agentPrompt}

Transaction:
${JSON.stringify(input)}

Velocity:
${JSON.stringify(velocity)}

Sanctions:
${JSON.stringify(sanctions)}
`;

    return llm.withStructuredOutput(RiskDecisionSchema).invoke(prompt);
  });

  return runnable.invoke(tx);
}
```
4) Wire it into an API handler and persist audit data
You need an immutable record of what was seen and why a decision was made. That record should include the raw input hash, tool results, model version, and final output.
```typescript
import crypto from "crypto";
import express from "express";
import { assessTransaction } from "./agent";

const app = express();
app.use(express.json());

app.post("/monitor/transaction", async (req, res) => {
  const rawBody = JSON.stringify(req.body);
  const payloadHash = crypto.createHash("sha256").update(rawBody).digest("hex");

  try {
    const result = await assessTransaction(req.body);

    // Persist to your case store here
    console.log({
      payloadHash,
      modelVersion: "gpt-4o-mini",
      result,
      receivedAt: new Date().toISOString(),
    });

    res.json({
      transactionId: req.body.transactionId,
      ...result,
    });
  } catch (err) {
    // Schema violations and model errors must not silently drop transactions
    res.status(422).json({ error: "assessment_failed", payloadHash });
  }
});

app.listen(3000);
```
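The audit record mentioned above can be given an explicit type so every decision row has the same shape. A sketch; the field names are illustrative and should be aligned with your case store's actual schema:

```typescript
import { createHash } from "crypto";

// One immutable row per decision. Field names are illustrative.
interface AuditRecord {
  transactionId: string;
  payloadHash: string; // sha256 hex of the raw request body
  toolCalls: { name: string; output: unknown }[];
  promptVersion: string;
  modelVersion: string;
  decision: "approve" | "review" | "escalate";
  analystOverride: string | null;
  receivedAt: string;
}

function buildAuditRecord(
  rawBody: string,
  transactionId: string,
  decision: AuditRecord["decision"]
): AuditRecord {
  return {
    transactionId,
    payloadHash: createHash("sha256").update(rawBody).digest("hex"),
    toolCalls: [],
    promptVersion: "v1",
    modelVersion: "gpt-4o-mini",
    decision,
    analystOverride: null, // set only when an analyst changes the outcome
    receivedAt: new Date().toISOString(),
  };
}
```

Hashing the raw body rather than storing it keeps the record verifiable without duplicating PII into the audit store.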
Production Considerations
- Deploy in-region
  - Keep data residency aligned with jurisdictional requirements.
  - If customer data cannot leave a region, use regional endpoints and regional storage only.
- Log for audit, not just debugging. At minimum, record:
  transactionId | payloadHash | toolCalls | promptVersion | modelVersion | decision | analystOverride
- Add hard guardrails before any LLM call:
  - If sanctions hit -> escalate
  - If amount > policy threshold AND new beneficiary -> review
  - If PII is present in free-text fields -> redact before prompt assembly
- Monitor drift and alert volume. Track:
  - alert rate per segment
  - false positive rate after analyst review
  - average latency per decision
  - override rate by compliance team
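The "redact before prompt assembly" guardrail can start as a simple pattern pass. A minimal sketch; the two patterns below are only examples, and a production system should use a vetted PII detection service instead:

```typescript
// Minimal redaction pass for free-text fields before prompt assembly.
// These patterns are illustrative only; use a vetted PII detection
// service in production.
const REDACTIONS: [RegExp, string][] = [
  [/\b\d{8,17}\b/g, "[ACCOUNT_NUMBER]"], // long digit runs: account/card numbers
  [/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[EMAIL]"],
];

function redact(text: string): string {
  return REDACTIONS.reduce((acc, [pattern, label]) => acc.replace(pattern, label), text);
}
```

Running redaction at the prompt-assembly boundary means no upstream service has to remember to sanitize its own free-text fields.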
Common Pitfalls
- Letting the model make first-pass decisions on raw transactions
  - Fix it by running deterministic policy checks first.
  - The agent should explain edge cases, not replace your controls stack.
- Sending unnecessary PII into prompts
  - Fix it by redacting names, account numbers, addresses, and free-text notes unless they are required.
  - Banking teams should treat prompts as regulated processing surfaces.
- Skipping structured outputs
  - Fix it by using withStructuredOutput() plus Zod validation.
  - Free-form text makes downstream case routing brittle and hard to audit.
- Ignoring human review thresholds
  - Fix it by routing borderline scores to analysts instead of auto-escalating everything.
  - A good monitoring agent reduces noise; it does not create another noisy queue.
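Routing borderline scores to analysts comes down to explicit thresholds on the 0-100 risk score. A sketch; the cutoff values are illustrative policy knobs, not recommendations:

```typescript
// Band a 0-100 risk score into a decision with explicit review
// thresholds. The default cutoffs are illustrative only.
type Decision = "approve" | "review" | "escalate";

function bandScore(score: number, reviewAt = 40, escalateAt = 80): Decision {
  if (score >= escalateAt) return "escalate";
  if (score >= reviewAt) return "review";
  return "approve"; // low scores stay out of the analyst queue
}
```

Keeping the thresholds as parameters makes it easy to tune the review band per segment as override-rate data comes in.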
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.