# How to Build a Transaction Monitoring Agent Using LangChain in TypeScript for Retail Banking
A transaction monitoring agent reviews payment activity, flags suspicious patterns, and routes cases for investigation. In retail banking, that matters because you need to catch fraud, money laundering, and account takeover behavior without flooding analysts with false positives.
## Architecture
A production-grade transaction monitoring agent in retail banking usually needs these components:
- **Transaction ingestion layer**
  - Pulls card payments, ACH, wire transfers, cash deposits, and account events from your core systems.
  - Normalizes raw records into a consistent schema before the agent sees them.
- **Risk scoring and rules engine**
  - Applies deterministic checks first: velocity thresholds, geo-distance anomalies, structuring patterns, beneficiary changes.
  - Keeps obvious policy violations out of the LLM path.
- **LangChain reasoning layer**
  - Uses `ChatOpenAI`, `PromptTemplate`, and `RunnableSequence` to classify cases and explain why they were flagged.
  - Produces structured outputs for downstream case management.
- **Case enrichment tools**
  - Pulls customer profile data, historical alerts, sanctions hits, device metadata, and recent account behavior.
  - Gives the model context without exposing more data than necessary.
- **Audit and evidence store**
  - Persists the input features, model output, prompt version, tool calls, and analyst disposition.
  - Required for compliance review and internal model governance.
- **Human review queue**
  - Sends high-risk or ambiguous cases to investigators.
  - Keeps final decisions out of fully automated control where policy requires it.
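The components above can be wired together as a rules-first pipeline. The sketch below is illustrative only: the interface and function names (`PipelineDeps`, `runPipeline`, `enqueueForReview`) are hypothetical placeholders, not a prescribed design, and the real implementations would call your core systems.

```typescript
// Minimal sketch of the monitoring pipeline. All names here are
// illustrative; adapt them to your own core-banking integration.

type Decision = "CLEAR" | "REVIEW" | "HIGH_RISK";

interface NormalizedTransaction {
  transactionId: string;
  amount: number;
  channel: string;
}

interface PipelineDeps {
  prescreen: (tx: NormalizedTransaction) => { flag: boolean; reason?: string };
  classify: (tx: NormalizedTransaction) => Promise<Decision>; // LangChain layer
  audit: (record: object) => Promise<void>;                   // evidence store
  enqueueForReview: (tx: NormalizedTransaction, why: string) => Promise<void>;
}

export async function runPipeline(
  tx: NormalizedTransaction,
  deps: PipelineDeps
): Promise<Decision> {
  // 1) Deterministic rules run first and short-circuit low-risk traffic.
  const screened = deps.prescreen(tx);
  if (!screened.flag) {
    await deps.audit({ tx, decision: "CLEAR", via: "rules" });
    return "CLEAR";
  }
  // 2) Only flagged transactions reach the LLM reasoning layer.
  const decision = await deps.classify(tx);
  await deps.audit({ tx, decision, via: "llm", reason: screened.reason });
  // 3) Anything the model did not clear goes to human review.
  if (decision !== "CLEAR") {
    await deps.enqueueForReview(tx, screened.reason ?? "model flag");
  }
  return decision;
}
```

Injecting the dependencies this way also makes the pipeline trivial to unit-test with stubs before any model call is involved.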
## Implementation

### 1) Define a typed transaction schema
Start by normalizing each transaction into a stable shape. Don’t pass raw ledger blobs into the model; you want predictable fields for both rules and prompts.
```ts
import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  customerId: z.string(),
  amount: z.number(),
  currency: z.string(),
  channel: z.enum(["card", "ach", "wire", "cash", "transfer"]),
  country: z.string(),
  merchantCategory: z.string().optional(),
  timestamp: z.string(),
  isNewPayee: z.boolean(),
  velocity24hCount: z.number(),
  velocity24hAmount: z.number(),
});

export type Transaction = z.infer<typeof TransactionSchema>;
```
This schema becomes your contract across ingestion, scoring, and audit logs. In regulated environments, stable contracts matter more than clever prompts.
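If a layer can't take the zod dependency (say, a lightweight ingestion worker), the same contract can still be enforced with a hand-rolled guard. This is a simplified sketch covering only a few fields; `isTransactionLike` is a hypothetical helper, and in practice `TransactionSchema.parse()` gives you the full check with far better error messages.

```typescript
// Hand-rolled runtime guard for part of the normalized shape.
// Simplified sketch; zod's TransactionSchema.parse() is the real contract.

const CHANNELS = ["card", "ach", "wire", "cash", "transfer"] as const;

export function isTransactionLike(raw: unknown): boolean {
  if (typeof raw !== "object" || raw === null) return false;
  const r = raw as Record<string, unknown>;
  return (
    typeof r.transactionId === "string" &&
    typeof r.customerId === "string" &&
    typeof r.amount === "number" &&
    typeof r.currency === "string" &&
    CHANNELS.includes(r.channel as (typeof CHANNELS)[number]) &&
    typeof r.isNewPayee === "boolean"
  );
}
```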
### 2) Build a deterministic pre-screen before LangChain
Use rules to short-circuit low-risk transactions and isolate suspicious ones. This keeps cost down and reduces unnecessary exposure of customer data to the model.
```ts
import { Transaction } from "./transaction-schema";

export function preScreen(tx: Transaction): { flag: boolean; reason?: string } {
  if (tx.amount >= 10000 && tx.channel === "cash") {
    return { flag: true, reason: "Large cash transaction" };
  }
  if (tx.velocity24hCount >= 8 && tx.velocity24hAmount >= tx.amount * 4) {
    return { flag: true, reason: "High velocity pattern" };
  }
  if (tx.isNewPayee && tx.amount >= 5000) {
    return { flag: true, reason: "New payee with elevated amount" };
  }
  return { flag: false };
}
```
In retail banking, this step should align with your AML policy thresholds and fraud playbooks. The LLM should explain cases; it should not be the first line of defense.
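To sanity-check thresholds against your playbooks, it helps to exercise the pre-screen against representative fixtures, one per rule plus a clean case. The sketch below inlines a copy of the same rules with illustrative values; the thresholds mirror the article's example, not real AML policy.

```typescript
// Exercising a pre-screen like the one above with illustrative fixtures.
// Thresholds mirror the example rules, not actual policy.

type Tx = {
  amount: number;
  channel: string;
  isNewPayee: boolean;
  velocity24hCount: number;
  velocity24hAmount: number;
};

export function preScreenSketch(tx: Tx): { flag: boolean; reason?: string } {
  if (tx.amount >= 10000 && tx.channel === "cash") {
    return { flag: true, reason: "Large cash transaction" };
  }
  if (tx.velocity24hCount >= 8 && tx.velocity24hAmount >= tx.amount * 4) {
    return { flag: true, reason: "High velocity pattern" };
  }
  if (tx.isNewPayee && tx.amount >= 5000) {
    return { flag: true, reason: "New payee with elevated amount" };
  }
  return { flag: false };
}

// Fixtures: one per rule, plus a clean case that should pass untouched.
export const fixtures: Array<[Tx, boolean]> = [
  [{ amount: 12000, channel: "cash", isNewPayee: false, velocity24hCount: 1, velocity24hAmount: 12000 }, true],
  [{ amount: 100, channel: "card", isNewPayee: false, velocity24hCount: 9, velocity24hAmount: 900 }, true],
  [{ amount: 6000, channel: "wire", isNewPayee: true, velocity24hCount: 1, velocity24hAmount: 6000 }, true],
  [{ amount: 40, channel: "card", isNewPayee: false, velocity24hCount: 2, velocity24hAmount: 80 }, false],
];
```

Running fixtures like these in CI means a threshold change is a reviewed code change, which is exactly the property regulators expect from deterministic controls.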
### 3) Use LangChain to classify and explain the alert

Here's a real TypeScript pattern using `ChatOpenAI`, `PromptTemplate`, `RunnableSequence`, and `StringOutputParser`.
```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { Transaction } from "./transaction-schema";

const prompt = PromptTemplate.fromTemplate(`
You are a retail banking transaction monitoring analyst.

Classify the transaction as one of:
- LOW_RISK
- REVIEW
- HIGH_RISK

Return only valid JSON with keys:
classification, rationale, recommended_action

Customer context:
Customer ID: {customerId}
Transaction ID: {transactionId}
Amount: {amount} {currency}
Channel: {channel}
Country: {country}
Merchant category: {merchantCategory}
Is new payee: {isNewPayee}
24h count: {velocity24hCount}
24h amount: {velocity24hAmount}

Rules:
- Be conservative on unexplained high-value transfers
- Consider structuring indicators
- Mention compliance concerns when relevant
- Do not invent facts
`);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Ending the sequence with StringOutputParser means invoke() returns
// a plain string instead of an AIMessage.
export const monitorChain = RunnableSequence.from([
  prompt,
  llm,
  new StringOutputParser(),
]);

export async function analyzeTransaction(tx: Transaction) {
  const text = await monitorChain.invoke({
    ...tx,
    // Fill the optional field and stringify numerics/booleans for the template.
    merchantCategory: tx.merchantCategory ?? "unknown",
    isNewPayee: String(tx.isNewPayee),
    velocity24hCount: String(tx.velocity24hCount),
    velocity24hAmount: String(tx.velocity24hAmount),
    amount: String(tx.amount),
  });
  return JSON.parse(text);
}
```
The important part here is the pattern:
- `PromptTemplate.fromTemplate()` keeps prompts versionable.
- `ChatOpenAI` handles classification.
- `RunnableSequence.from()` makes the workflow explicit.
- `temperature: 0` reduces variance in regulatory workflows.

In practice you'll also want structured parsing with `zod` or `JsonOutputParser`, but the above shows the core LangChain flow in TypeScript.
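Even before wiring in a full structured parser, it's worth a hardened boundary between the raw model text and your case systems. The sketch below is a hypothetical dependency-free validator (`parseAnalysis`) for the JSON shape the prompt requests; in production, zod or LangChain's structured output parsers would do this job.

```typescript
// Sketch of validating the model's JSON output before it reaches case
// management. Hand-rolled to stay dependency-free; use zod or a
// structured output parser in production.

const CLASSIFICATIONS = ["LOW_RISK", "REVIEW", "HIGH_RISK"] as const;
type Classification = (typeof CLASSIFICATIONS)[number];

export interface Analysis {
  classification: Classification;
  rationale: string;
  recommended_action: string;
}

export function parseAnalysis(text: string): Analysis {
  // Models sometimes wrap JSON in markdown fences; strip them first.
  const cleaned = text
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "");
  const parsed = JSON.parse(cleaned);
  if (!CLASSIFICATIONS.includes(parsed.classification)) {
    throw new Error(`Unexpected classification: ${parsed.classification}`);
  }
  if (
    typeof parsed.rationale !== "string" ||
    typeof parsed.recommended_action !== "string"
  ) {
    throw new Error("Missing rationale or recommended_action");
  }
  return parsed as Analysis;
}
```

Anything that fails this check should be rejected and retried or routed to a human, never written to the case system as-is.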
### 4) Wrap it in an alerting service
Combine rules + LLM reasoning + audit logging. The service should emit both the decision and the evidence trail.
```ts
import path from "node:path";
import fsPromises from "node:fs/promises";
import { preScreen } from "./rules";
import { analyzeTransaction } from "./llm";
import { TransactionSchema, type Transaction } from "./transaction-schema";

export async function processTransaction(txData: unknown) {
  // Validate at the boundary instead of casting blindly.
  const tx: Transaction = TransactionSchema.parse(txData);
  const screened = preScreen(tx);

  if (!screened.flag) {
    return {
      statusCode: "CLEAR",
      reason: "No rule hit",
    };
  }

  const analysis = await analyzeTransaction(tx);

  const auditRecord = {
    ...tx,
    screeningReason: screened.reason,
    analysis,
    reviewedAt: new Date().toISOString(),
  };

  const auditDir = path.join(process.cwd(), "audit");
  await fsPromises.mkdir(auditDir, { recursive: true });
  await fsPromises.writeFile(
    path.join(auditDir, `${tx.transactionId}.json`),
    JSON.stringify(auditRecord, null, "\t")
  );

  return {
    statusCode: analysis.classification,
    ...analysis,
  };
}
```
That audit record is not optional. For retail banking you need traceability for SAR/STR workflows, internal investigations, model validation, and regulator questions about why an alert was raised.
## Production Considerations
- **Keep customer data residency explicit**
  - If your bank requires regional processing, pin inference to approved regions only.
  - Avoid sending full PII into prompts; use tokenized identifiers where possible.
- **Log everything needed for audit**
  - Store the prompt version, model name, input features used, output JSON, and human disposition.
  - Make logs immutable or append-only so investigators can reconstruct decisions later.
- **Add guardrails before production rollout**
  - Validate all outputs against a schema before alerts hit case management.
  - Reject free-form responses; only accept structured classifications with bounded actions.
- **Monitor drift and alert quality**
| Signal | Why it matters | What to watch |
|---|---|---|
| Alert volume | Sudden spikes often mean bad thresholds or upstream data issues | Alerts per hour/day |
| False positive rate | Too many noisy alerts waste investigator time | Analyst dismissals |
| Latency | Monitoring jobs must keep up with payment flows | P95/P99 inference time |
| Regional routing | Compliance issue if data crosses borders | Model endpoint location |
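The signals in the table can be derived from the audit trail itself. Below is a rough in-memory sketch; the class and field names (`AlertQualityTracker`, `dismissedByAnalyst`) are hypothetical, and a real deployment would compute these in your metrics pipeline rather than in process.

```typescript
// Rough in-memory tracker for the drift signals in the table above.
// Names and structure are illustrative only.

interface AlertRecord {
  timestampMs: number;
  dismissedByAnalyst: boolean; // a false-positive proxy
  inferenceLatencyMs: number;
}

export class AlertQualityTracker {
  private records: AlertRecord[] = [];

  record(r: AlertRecord): void {
    this.records.push(r);
  }

  // Alerts within the last `windowMs` milliseconds of `nowMs`.
  alertVolume(nowMs: number, windowMs: number): number {
    return this.records.filter((r) => nowMs - r.timestampMs <= windowMs).length;
  }

  // Fraction of alerts analysts dismissed.
  falsePositiveRate(): number {
    if (this.records.length === 0) return 0;
    const dismissed = this.records.filter((r) => r.dismissedByAnalyst).length;
    return dismissed / this.records.length;
  }

  // Simple nearest-rank percentile over observed inference latencies.
  latencyPercentile(p: number): number {
    const sorted = this.records
      .map((r) => r.inferenceLatencyMs)
      .sort((a, b) => a - b);
    if (sorted.length === 0) return 0;
    const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
    return sorted[Math.max(0, idx)];
  }
}
```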
## Common Pitfalls
- **Letting the LLM replace deterministic controls**
  - Don't ask the model to decide everything.
  - Use rules for hard thresholds and let LangChain explain borderline cases.
- **Passing raw PII into prompts**
  - Don't include full names, account numbers, or addresses unless absolutely required.
  - Tokenize sensitive fields and keep mappings outside the model path.
- **Ignoring output validation**
  - Never trust free-text classifications directly.
  - Enforce a schema with `zod` or a JSON parser before writing to case systems.
- **Skipping audit/version control**
  - If you can't reproduce why an alert fired six months later, you don't have a banking-grade system.
  - Version prompts like code and persist every chain input/output pair.
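On the PII point, tokenization can be as simple as swapping sensitive values for opaque tokens before prompt construction and resolving them afterwards. The `TokenVault` below is a hypothetical in-memory sketch; a real deployment would back it with a database or HSM-protected token service so the mapping never sits in the model path.

```typescript
// Sketch of tokenizing PII before it reaches the prompt. The in-memory
// maps are for illustration only; back this with a real token service.

import { randomUUID } from "node:crypto";

export class TokenVault {
  private tokenToValue = new Map<string, string>();
  private valueToToken = new Map<string, string>();

  // Replace a sensitive value with a stable opaque token.
  tokenize(value: string): string {
    const existing = this.valueToToken.get(value);
    if (existing) return existing;
    const token = `tok_${randomUUID()}`;
    this.tokenToValue.set(token, value);
    this.valueToToken.set(value, token);
    return token;
  }

  // Resolve a token back to the original value, outside the model path.
  detokenize(token: string): string | undefined {
    return this.tokenToValue.get(token);
  }
}
```

Because the same value always maps to the same token, the model can still reason about repeated payees or accounts without ever seeing the underlying identifier.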
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.