How to Build a Transaction Monitoring Agent Using LangChain in TypeScript for Fintech
A transaction monitoring agent watches payment activity, flags suspicious patterns, and explains why a transaction needs review. For fintech teams, that matters because you need faster fraud detection, cleaner compliance workflows, and an audit trail that survives regulator scrutiny.
Architecture
- **Transaction ingestion layer**
  - Pulls events from your payment processor, Kafka topic, webhook queue, or batch feed.
  - Normalizes fields like `amount`, `currency`, `merchant`, `country`, `deviceId`, and `customerId`.
- **Risk rules engine**
  - Applies deterministic checks first: velocity limits, sanctions matches, unusual geolocation, high-risk MCC codes.
  - Keeps obvious cases out of the LLM path.
- **LangChain decision agent**
  - Uses `ChatOpenAI` with a structured output schema to classify cases.
  - Produces a consistent result such as `approve`, `review`, or `escalate`.
- **Evidence retrieval layer**
  - Pulls customer history, prior alerts, and policy snippets from a vector store or internal API.
  - Gives the model context without dumping raw databases into prompts.
- **Audit log store**
  - Persists input features, model output, prompt version, policy version, and reviewer outcome.
  - This is non-negotiable for compliance and post-incident analysis.
- **Case management integration**
  - Sends flagged transactions to Slack, Jira, ServiceNow, or your AML case system.
  - Lets analysts review with the model's explanation attached.
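The flow between these layers can be sketched as a simple typed pipeline. Everything here is illustrative: the `Stage` interface, `runPipeline`, and the stage names are placeholders for your own implementations, not part of LangChain or any library.

```typescript
// Hypothetical stage signatures for the monitoring pipeline described above.
type RawEvent = unknown;
type Transaction = { transactionId: string; amount: number; country: string };
type Decision = { decision: "approve" | "review" | "escalate"; reason: string };

type Stage = {
  normalize: (event: RawEvent) => Transaction;            // ingestion layer
  applyRules: (tx: Transaction) => Decision | null;       // deterministic rules engine
  decide: (tx: Transaction) => Promise<Decision>;         // LangChain agent
  audit: (tx: Transaction, d: Decision) => Promise<void>; // audit log store
  route: (d: Decision) => Promise<void>;                  // case management
};

async function runPipeline(s: Stage, event: RawEvent): Promise<Decision> {
  const tx = s.normalize(event);
  // Rules short-circuit the LLM: only rule misses reach the agent.
  const decision = s.applyRules(tx) ?? (await s.decide(tx));
  await s.audit(tx, decision);
  await s.route(decision);
  return decision;
}
```

The important design choice is that the rules engine sits before the agent and can return early, which keeps cost, latency, and policy-critical decisions out of the model path.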
Implementation
1) Define the transaction schema and output contract
Start with strict types. In fintech, free-form JSON is how you end up with broken audit trails and inconsistent decisions.
```typescript
import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  customerId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  merchantName: z.string(),
  country: z.string().length(2),
  deviceId: z.string().optional(),
  timestamp: z.string(),
  riskSignals: z.array(z.string()).default([]),
});

export const MonitoringDecisionSchema = z.object({
  decision: z.enum(["approve", "review", "escalate"]),
  reason: z.string(),
  riskScore: z.number().min(0).max(100),
  requiredAction: z.string(),
});

export type Transaction = z.infer<typeof TransactionSchema>;
export type MonitoringDecision = z.infer<typeof MonitoringDecisionSchema>;
```
2) Build the LangChain agent with structured output
Use a chat model plus structured output so the response is machine-readable. The key class here is `ChatOpenAI`, and the key method is `.withStructuredOutput()`.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { TransactionSchema, MonitoringDecisionSchema } from "./schemas";

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

const monitoringModel = model.withStructuredOutput(MonitoringDecisionSchema);

export async function evaluateTransaction(rawTx: unknown) {
  const tx = TransactionSchema.parse(rawTx);

  const prompt = [
    new HumanMessage(
      `
You are a transaction monitoring analyst for a fintech company.

Policy:
- Approve low-risk legitimate transactions.
- Review transactions with weak anomalies or missing context.
- Escalate transactions that suggest fraud, AML risk, sanctions exposure, or identity abuse.

Transaction:
${JSON.stringify(tx)}
`
    ),
  ];

  const result = await monitoringModel.invoke(prompt);
  return result;
}
```
This pattern gives you deterministic parsing and avoids brittle regex extraction. It also makes it easy to store the exact decision payload in an audit table.
3) Add deterministic pre-checks before the LLM
Do not send every event to the model. Use rules to catch obvious cases and reduce cost and latency.
```typescript
import { evaluateTransaction } from "./agent";
import { TransactionSchema } from "./schemas";

function applyRules(tx: {
  amount: number;
  country: string;
  riskSignals?: string[];
}) {
  if (tx.amount > 10000) {
    return { decision: "review", reason: "Amount exceeds manual review threshold" };
  }
  if (tx.country === "IR" || tx.country === "KP") {
    return { decision: "escalate", reason: "High-risk jurisdiction detected" };
  }
  if ((tx.riskSignals ?? []).includes("sanctions_hit")) {
    return { decision: "escalate", reason: "Sanctions signal present" };
  }
  return null;
}

export async function monitorTransaction(rawTx: unknown) {
  // Validate before any decision logic so rules never run on malformed input.
  const tx = TransactionSchema.parse(rawTx);
  const ruleHit = applyRules(tx);
  if (ruleHit) return ruleHit;
  return evaluateTransaction(tx);
}
```
This split matters in production. Rules handle policy-critical cases; the LLM handles ambiguous patterns that need contextual reasoning.
4) Persist decisions for audit and analyst review
Fintech systems need traceability. Store the input snapshot, output decision, policy version, and timestamp in your database before routing to case management.
```typescript
import { monitorTransaction } from "./monitor";
import { TransactionSchema } from "./schemas";

type AuditRecord = {
  transactionId: string;
  policyVersion: string;
  modelName: string;
  inputSnapshot: unknown;
  outputSnapshot: unknown;
  createdAt: string;
};

export async function saveAuditRecord(record: AuditRecord) {
  // Replace with your DB client
  console.log("AUDIT_RECORD", JSON.stringify(record));
}

export async function processTransaction(rawTx: unknown) {
  const tx = TransactionSchema.parse(rawTx);
  const decision = await monitorTransaction(tx);

  await saveAuditRecord({
    transactionId: tx.transactionId,
    policyVersion: "2026-04-01",
    modelName: "gpt-4o-mini",
    inputSnapshot: tx,
    outputSnapshot: decision,
    createdAt: new Date().toISOString(),
  });

  return decision;
}
```
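If your regulators expect tamper-evidence, one option is to chain a hash into each audit record so any later edit to a stored record breaks the chain. This is purely a sketch using Node's built-in `crypto` module; `chainHash` is a hypothetical helper, not part of the snippet above.

```typescript
import { createHash } from "crypto";

// Sketch: hash the serialized record together with the previous record's
// hash. Store the result on each row; re-walking the chain detects tampering.
export function chainHash(previousHash: string, record: unknown): string {
  return createHash("sha256")
    .update(previousHash)
    .update(JSON.stringify(record))
    .digest("hex");
}
```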
Production Considerations
- **Keep sensitive data out of prompts**
  - Mask PANs, account numbers, national IDs, and full addresses before sending anything to the model.
  - Only pass what is needed for the decision.
- **Respect data residency**
  - Route EU customer transactions through EU-hosted infrastructure if your regulatory posture requires it.
  - Keep prompt logs and vector stores in-region when handling regulated data.
- **Add observability**
  - Track latency, token usage, escalation rate, false positives, and analyst override rate.
  - Tag every run with `policyVersion` and `modelName` so you can compare outcomes after changes.
- **Build human-in-the-loop controls**
  - Auto-approve only low-risk cases with strong rule coverage.
  - Send borderline or high-value transactions to analysts before final action.
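For the masking point above, a minimal sketch of a redaction pass. The regexes are illustrative only, not exhaustive: real PAN detection should combine Luhn checks with your own data classification, and `maskSensitiveFields` is a hypothetical helper.

```typescript
// Sketch: redact card-number-like digit runs and email addresses before a
// payload reaches any prompt. Patterns are illustrative, not production-grade.
export function maskSensitiveFields(text: string): string {
  return text
    // 13-19 digit runs (possible PANs); keep the last 4 for analyst context
    .replace(/\b\d{9,15}(\d{4})\b/g, (_match, last4) => `****${last4}`)
    // email addresses
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email_redacted]");
}
```

Run this over the serialized transaction before it is interpolated into the prompt, and keep the unmasked original only in the in-region audit store.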
Common Pitfalls
- **Letting the LLM make first-pass decisions on everything**
  - This increases cost and creates inconsistent outcomes.
  - Fix it by running deterministic rules first and reserving the model for ambiguous cases.
- **Using unstructured text outputs**
  - Free-form responses are hard to validate and impossible to audit cleanly.
  - Fix it with Zod schemas plus `.withStructuredOutput()` so every response matches a contract.
- **Ignoring compliance metadata**
  - If you do not store prompt version, policy version, input snapshot, and final action, your audit trail is weak.
  - Fix it by persisting every decision record alongside analyst feedback and downstream disposition.
A transaction monitoring agent is not just an LLM wrapper. In fintech it has to be predictable enough for operations teams, explainable enough for compliance teams, and strict enough for regulators. Build it around rules first, structure second, then let LangChain handle the reasoning layer where human judgment used to sit.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit