How to Build a Transaction Monitoring Agent Using LangChain in TypeScript for Payments
A transaction monitoring agent watches payment events, scores them for risk, and decides whether to allow, review, or escalate them. For payments teams, this matters because fraud, AML flags, sanctions hits, and operational anomalies all show up as transactions moving through the system in real time.
Architecture
- **Event ingestion layer**
  - Pulls payment events from Kafka, SQS, webhooks, or a database outbox.
  - Normalizes raw payment payloads into a stable internal schema.
- **Risk enrichment layer**
  - Adds customer profile data, merchant category code, country pairs, velocity stats, device signals, and historical behavior.
  - Keeps enrichment deterministic so the agent is explainable.
- **LangChain decision agent**
  - Uses `ChatOpenAI`, `PromptTemplate`, and `RunnableSequence` to classify the event.
  - Produces structured output: `allow`, `review`, or `block`, plus reasons.
- **Policy and compliance layer**
  - Applies hard rules before or after the model: sanctions countries, PEP flags, threshold breaches, and jurisdiction-specific controls.
  - Stores every decision with traceable evidence for audit.
- **Case management sink**
  - Sends suspicious cases to a queue or case tool for analyst review.
  - Writes immutable logs for compliance and model governance.
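The ingestion layer's normalization step can be sketched as a plain mapping function. The raw payload shape below is hypothetical (real field names vary by provider), and the minor-unit conversion assumes a two-decimal currency:

```typescript
// Hypothetical raw webhook payload; field names vary by payment provider.
interface RawPaymentEvent {
  id: string;
  acct: string;
  amt_minor: number; // amount in minor units, e.g. cents
  ccy: string;
  country_code: string;
  mcc: string;
  ts: string;
}

// The stable internal shape the rest of the pipeline depends on.
interface NormalizedTransaction {
  transactionId: string;
  accountId: string;
  amount: number; // major units
  currency: string;
  country: string;
  merchantCategoryCode: string;
  timestamp: string; // ISO 8601
}

function normalize(raw: RawPaymentEvent): NormalizedTransaction {
  return {
    transactionId: raw.id,
    accountId: raw.acct,
    amount: raw.amt_minor / 100, // assumes a two-decimal currency
    currency: raw.ccy.toUpperCase(),
    country: raw.country_code.toUpperCase(),
    merchantCategoryCode: raw.mcc,
    timestamp: new Date(raw.ts).toISOString(),
  };
}
```

Keeping this mapping in one place means the agent, the policy layer, and the audit log all see the same field names.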
Implementation
1) Define the transaction schema and risk features
Keep the input contract explicit. In payments systems, loose JSON becomes an incident when auditors ask why a decision was made six months later.
```typescript
import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  accountId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  country: z.string().length(2),
  merchantCategoryCode: z.string(),
  timestamp: z.string(),
  customerRiskTier: z.enum(["low", "medium", "high"]),
  velocity24hCount: z.number().int().nonnegative(),
  velocity24hAmount: z.number().nonnegative(),
  sanctionsMatch: z.boolean(),
});

export type Transaction = z.infer<typeof TransactionSchema>;

export function buildRiskFeatures(txn: Transaction) {
  return {
    ...txn,
    highAmount: txn.amount >= 10000,
    unusualVelocity: txn.velocity24hCount >= 10 || txn.velocity24hAmount >= 50000,
    crossBorder: txn.country !== "US",
    complianceTrigger: txn.sanctionsMatch || txn.customerRiskTier === "high",
  };
}
```
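To make the derived features concrete, here is the same flag logic inlined as a standalone snippet (the thresholds match `buildRiskFeatures` above), applied to a cross-border, high-amount payment:

```typescript
type Txn = {
  amount: number;
  country: string;
  velocity24hCount: number;
  velocity24hAmount: number;
  sanctionsMatch: boolean;
  customerRiskTier: "low" | "medium" | "high";
};

// Inline restatement of the feature logic so this snippet runs standalone.
function riskFlags(txn: Txn) {
  return {
    highAmount: txn.amount >= 10000,
    unusualVelocity: txn.velocity24hCount >= 10 || txn.velocity24hAmount >= 50000,
    crossBorder: txn.country !== "US",
    complianceTrigger: txn.sanctionsMatch || txn.customerRiskTier === "high",
  };
}

const flags = riskFlags({
  amount: 15000,
  country: "GB",
  velocity24hCount: 3,
  velocity24hAmount: 18000,
  sanctionsMatch: false,
  customerRiskTier: "medium",
});
// highAmount: true, crossBorder: true, unusualVelocity: false, complianceTrigger: false
```

Because every flag is a pure function of the input, analysts can reproduce exactly what the model saw for any historical decision.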
2) Build a structured LangChain classifier
Use `ChatOpenAI` with structured output so the model returns machine-readable decisions instead of free text you would otherwise have to parse in production.
```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { z } from "zod";
import { buildRiskFeatures, TransactionSchema } from "./transaction";

const DecisionSchema = z.object({
  action: z.enum(["allow", "review", "block"]),
  reason: z.string(),
  signals: z.array(z.string()),
});

type Decision = z.infer<typeof DecisionSchema>;

// Rules are listed in priority order, block conditions first, so a
// high-amount transaction with a compliance trigger can never be
// downgraded to review.
const prompt = PromptTemplate.fromTemplate(`
You are a transaction monitoring agent for payments.
Return a decision based on the transaction and risk features.

Rules, in priority order:
- block if sanctionsMatch is true
- block if complianceTrigger is true
- review if unusualVelocity is true or highAmount is true
- otherwise allow

Transaction:
{transactionJson}

Return JSON with keys action, reason, signals.
`);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const chain = RunnableSequence.from([
  async (input: unknown) => {
    const txn = TransactionSchema.parse(input);
    return {
      transactionJson: JSON.stringify(buildRiskFeatures(txn), null, 2),
    };
  },
  prompt,
]);

// Build the full pipeline once at module load, not per invocation.
const decisionChain = chain.pipe(llm.withStructuredOutput(DecisionSchema));

export async function scoreTransaction(input: unknown): Promise<Decision> {
  const result = await decisionChain.invoke(input);
  if (result.action === "block" && result.signals.length === 0) {
    throw new Error("Invalid decision payload");
  }
  return result;
}
```
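One thing the classifier does not handle is model unavailability. For payments, a common pattern is to fail safe to `review`, never to `allow`. A sketch with the scorer abstracted as a parameter so it runs without an API key; the wrapper name and timeout are my own, not a LangChain API:

```typescript
// Mirrors the DecisionSchema shape from the classifier module.
type Decision = { action: "allow" | "review" | "block"; reason: string; signals: string[] };

// Wrap any scorer so errors and timeouts degrade to manual review.
async function scoreWithFallback(
  score: (input: unknown) => Promise<Decision>,
  input: unknown,
  timeoutMs = 5000,
): Promise<Decision> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("scoring timed out")), timeoutMs);
  });
  try {
    return await Promise.race([score(input), timeout]);
  } catch (err) {
    return {
      action: "review",
      reason: `Scoring unavailable: ${err instanceof Error ? err.message : String(err)}`,
      signals: ["scoring_failure"],
    };
  } finally {
    clearTimeout(timer);
  }
}
```

Wiring it in is one call: `scoreWithFallback(scoreTransaction, txn)`. The key property is that an outage creates analyst work, not silent approvals.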
3) Add hard policy gates before the model
For payments, the LLM should not be the only control. Deterministic rules handle non-negotiables like sanctions screening and residency constraints.
```typescript
import { Transaction } from "./transaction";

export function applyPolicyGates(txn: Transaction) {
  const blockedCountries = new Set(["IR", "KP", "SY"]);

  if (blockedCountries.has(txn.country)) {
    return {
      action: "block" as const,
      reason: "Country is blocked by policy",
      signals: ["sanctions_country"],
    };
  }

  if (txn.sanctionsMatch) {
    return {
      action: "block" as const,
      reason: "Sanctions screening hit",
      signals: ["sanctions_match"],
    };
  }

  return null;
}
```
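As the rule set grows (hard amount limits, residency constraints, velocity caps), a small combinator keeps the evaluation order explicit. This is an illustrative sketch, not part of LangChain, and the hard-limit threshold is made up:

```typescript
// A gate either blocks with evidence or passes (null).
type GateDecision = { action: "block"; reason: string; signals: string[] } | null;
type Gate<T> = (txn: T) => GateDecision;

// Evaluate gates in priority order; the first hit wins.
function runGates<T>(txn: T, gates: Gate<T>[]): GateDecision {
  for (const gate of gates) {
    const decision = gate(txn);
    if (decision) return decision;
  }
  return null;
}

interface PolicyInput { country: string; amount: number; }

const sanctionsCountryGate: Gate<PolicyInput> = (txn) =>
  new Set(["IR", "KP", "SY"]).has(txn.country)
    ? { action: "block", reason: "Country is blocked by policy", signals: ["sanctions_country"] }
    : null;

// Illustrative hard ceiling; real limits come from your risk policy.
const hardLimitGate: Gate<PolicyInput> = (txn) =>
  txn.amount > 1_000_000
    ? { action: "block", reason: "Amount exceeds hard limit", signals: ["hard_limit"] }
    : null;
```

Each gate stays a pure, independently testable function, and the ordering of the array is the documented precedence of your controls.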
4) Wire it into an async handler
This is the pattern you actually deploy behind a queue consumer or webhook processor.
```typescript
import { scoreTransaction } from "./agent";
import { applyPolicyGates } from "./policy";
import { TransactionSchema } from "./transaction";

export async function handlePaymentEvent(rawEvent: unknown) {
  const txn = TransactionSchema.parse(rawEvent);

  // Hard policy gates run first and short-circuit the model entirely.
  const gateDecision = applyPolicyGates(txn);
  if (gateDecision) {
    await persistDecision(txn.transactionId, gateDecision);
    return gateDecision;
  }

  const modelDecision = await scoreTransaction(txn);
  await persistDecision(txn.transactionId, modelDecision);

  if (modelDecision.action === "review") {
    await createCase(txn.transactionId, modelDecision.reason);
  }

  return modelDecision;
}

// Placeholder: replace with your append-only audit store.
async function persistDecision(transactionId: string, decision: unknown) {
  console.log("persist", transactionId, JSON.stringify(decision));
}

// Placeholder: replace with your case management integration.
async function createCase(transactionId: string, reason: string) {
  console.log("case created", transactionId, reason);
}
```
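Queue consumers and webhook processors redeliver events, so the handler should be idempotent. A minimal sketch; the in-memory `Map` stands in for a persistent store such as Redis or a database unique constraint on `transactionId`:

```typescript
type Handler = (event: unknown) => Promise<unknown>;

// Wrap a handler so redelivered events return the stored decision
// instead of being re-scored (and re-billed, and re-cased).
function makeIdempotent(handler: Handler, getKey: (event: unknown) => string): Handler {
  const seen = new Map<string, unknown>();
  return async (event) => {
    const key = getKey(event);
    if (seen.has(key)) return seen.get(key);
    const result = await handler(event);
    seen.set(key, result);
    return result;
  };
}
```

Usage would look like `makeIdempotent(handlePaymentEvent, (e) => (e as { transactionId: string }).transactionId)` before registering the consumer.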
Production Considerations
- **Keep deterministic controls outside the model.** Sanctions blocks, jurisdiction restrictions, and threshold rules should run before any LLM call. That gives you predictable enforcement and cleaner audit trails.
- **Log every decision with evidence.** Store input features, rule hits, model version, prompt version, output JSON, and final action. In payments compliance reviews, “the model said so” is not evidence.
- **Respect data residency.** If your payment data must stay in-region, run the agent in that region and avoid sending raw PANs or personal data to external services. Tokenize sensitive fields before they reach LangChain.
- **Monitor drift and false positives.** Track approval rate by merchant category, country pair, customer tier, and hour of day. A spike in manual reviews usually means your thresholds are off or your prompt changed behavior.
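The evidence requirement above translates naturally into a record type. The field names here are illustrative, not a standard:

```typescript
// Illustrative audit record; adapt field names to your governance framework.
interface DecisionAuditRecord {
  transactionId: string;
  decidedAt: string;                        // ISO 8601
  inputFeatures: Record<string, unknown>;   // exactly what the model saw
  ruleHits: string[];                       // deterministic gates that fired
  modelVersion: string | null;              // null when a hard gate decided
  promptVersion: string | null;
  rawModelOutput: string | null;
  finalAction: "allow" | "review" | "block";
}

function buildAuditRecord(
  transactionId: string,
  finalAction: "allow" | "review" | "block",
  opts: Partial<Omit<DecisionAuditRecord, "transactionId" | "finalAction" | "decidedAt">> = {},
): DecisionAuditRecord {
  return {
    transactionId,
    decidedAt: new Date().toISOString(),
    inputFeatures: opts.inputFeatures ?? {},
    ruleHits: opts.ruleHits ?? [],
    modelVersion: opts.modelVersion ?? null,
    promptVersion: opts.promptVersion ?? null,
    rawModelOutput: opts.rawModelOutput ?? null,
    finalAction,
  };
}
```

Writing one of these per decision, including the allow decisions, is what lets you answer an auditor's question about a specific transaction six months later.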
Common Pitfalls
- **Using free-form LLM output.** Don’t parse plain-text decisions with regex. Use `withStructuredOutput()` and validate with Zod so your downstream systems get typed responses.
- **Letting the agent see too much sensitive data.** Don’t pass raw card numbers, CVV values, or unnecessary PII into prompts. Minimize fields early and mask anything not needed for risk scoring.
- **Treating compliance as an LLM task.** Don’t ask the model to decide whether sanctions rules apply. That belongs in deterministic policy code with versioned rule sets and auditable logic.
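The data-minimization pitfall can be enforced mechanically with an allowlist applied before anything reaches a prompt. A sketch; the safe-field list and the masking rule are illustrative:

```typescript
// Allowlist of fields the model is permitted to see; everything else is dropped.
const PROMPT_SAFE_FIELDS = new Set([
  "transactionId", "amount", "currency", "country",
  "merchantCategoryCode", "customerRiskTier",
  "velocity24hCount", "velocity24hAmount", "sanctionsMatch",
]);

function minimizeForPrompt(txn: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(txn)) {
    if (PROMPT_SAFE_FIELDS.has(key)) out[key] = value;
  }
  return out;
}

// Defensive masking if a PAN ever leaks into a payload: keep BIN + last four.
function maskPan(pan: string): string {
  return pan.length >= 10
    ? `${pan.slice(0, 6)}${"*".repeat(pan.length - 10)}${pan.slice(-4)}`
    : "****";
}
```

An allowlist fails closed: a new upstream field never reaches the prompt until someone deliberately adds it, which is the direction you want the mistake to go.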
If you build this way, LangChain handles orchestration while your payment controls stay explicit. That’s the right split for regulated transaction monitoring systems.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit