How to Build a Transaction Monitoring Agent Using LangChain in TypeScript for Wealth Management
A transaction monitoring agent for wealth management watches client activity, scores transactions against policy and behavioral context, and escalates suspicious cases to compliance or relationship managers. It matters because wealth platforms deal with high-value transfers, cross-border movement, complex entity structures, and strict audit requirements, so manual review does not scale.
Architecture
Build this agent with a small set of components that map cleanly to the workflow:
- **Transaction ingestion layer**
  - Pulls trades, wire transfers, journal entries, and account events from your core systems.
  - Normalizes records into a single schema before they reach the agent.
- **Policy and risk rules engine**
  - Encodes deterministic checks like velocity limits, jurisdiction blocks, sanction hits, and unusual beneficiary changes.
  - Keeps hard compliance rules out of the LLM.
- **LangChain analysis chain**
  - Uses an LLM to summarize patterns, explain anomalies, and recommend next actions.
  - Produces structured output that downstream systems can store and route.
- **Case management router**
  - Opens alerts, assigns severity, and sends cases to human review.
  - Stores the full decision trail for audit.
- **Audit and evidence store**
  - Persists input data, model output, rule hits, timestamps, and reviewer actions.
  - Supports regulatory review and internal model governance.
- **Guardrail layer**
  - Redacts PII where possible, enforces prompt constraints, and blocks unsupported actions.
  - Prevents the model from making final compliance decisions.
Implementation
1) Define a normalized transaction schema
We want a single input shape whether the source is a wire system, brokerage ledger, or CRM event stream. Use zod for validation so bad records fail early.
```typescript
import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  accountId: z.string(),
  clientId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  type: z.enum(["wire", "trade", "journal", "deposit", "withdrawal"]),
  country: z.string().min(2),
  counterpartyName: z.string(),
  counterpartyCountry: z.string().min(2),
  timestamp: z.string(),
  channel: z.enum(["online", "advisor", "ops", "api"]),
});

export type Transaction = z.infer<typeof TransactionSchema>;
```
This is where you enforce data quality before any model call. In wealth management, garbage in becomes audit pain later.
2) Build a structured LangChain analysis chain
Use ChatOpenAI, PromptTemplate, StructuredOutputParser, and RunnableSequence. The model should return a fixed schema so you can route alerts without parsing free text.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { z } from "zod";

const RiskAssessmentSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  rationale: z.string(),
  recommendedAction: z.enum(["monitor", "review", "escalate"]),
  suspiciousIndicators: z.array(z.string()),
});

const parser = StructuredOutputParser.fromZodSchema(RiskAssessmentSchema);

const prompt = PromptTemplate.fromTemplate(`
You are a transaction monitoring analyst for wealth management.
Use only the provided transaction data and policy context.

Transaction:
{transactionJson}

Policy Context:
{policyContext}

Return output in this format:
{format_instructions}
`);

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

export const riskChain = RunnableSequence.from([
  {
    transactionJson: (input: { transactionJson: string; policyContext: string }) =>
      input.transactionJson,
    policyContext: (input: { transactionJson: string; policyContext: string }) =>
      input.policyContext,
    format_instructions: () => parser.getFormatInstructions(),
  },
  prompt,
  llm,
]);
```
The important part is the structured output contract. If your case system expects riskLevel and recommendedAction, don’t let the LLM freestyle those fields.
3) Add deterministic pre-checks before the LLM
The agent should not rely on the model for obvious policy violations. Run hard checks first, then pass only relevant context into LangChain.
```typescript
type PolicyHit = {
  code: string;
  severity: "low" | "medium" | "high";
};

function evaluateRules(txn: Transaction): PolicyHit[] {
  const hits: PolicyHit[] = [];

  if (txn.amount >= 1000000) {
    hits.push({ code: "HIGH_VALUE_TRANSFER", severity: "medium" });
  }
  if (txn.country !== txn.counterpartyCountry) {
    hits.push({ code: "CROSS_BORDER_ACTIVITY", severity: "low" });
  }
  if (["IR", "KP", "SY"].includes(txn.counterpartyCountry)) {
    hits.push({ code: "RESTRICTED_JURISDICTION", severity: "high" });
  }

  return hits;
}
```
Then combine rule results with the LLM summary:
```typescript
export async function analyzeTransaction(txnInput: unknown) {
  const txn = TransactionSchema.parse(txnInput);
  const policyHits = evaluateRules(txn);

  const result = await riskChain.invoke({
    transactionJson: JSON.stringify(txn),
    policyContext: JSON.stringify(policyHits),
  });

  // StructuredOutputParser.parse is async, so it must be awaited.
  const parsed = await parser.parse(result.content as string);

  return {
    transactionId: txn.transactionId,
    ...parsed,
    policyHits,
    reviewedAt: new Date().toISOString(),
  };
}
```
In production, this function becomes your orchestration point. It validates input, applies rules, runs LangChain analysis, then emits an auditable result object.
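One way to make the emitted result auditable is to attach a content hash of the raw input alongside version metadata. A minimal sketch using Node's built-in `crypto` module (the envelope field names and the prompt version tag are illustrative, not a fixed contract):

```typescript
import { createHash } from "node:crypto";

// Illustrative audit envelope: hashing the raw input lets reviewers
// later prove exactly which record produced a given assessment.
function auditEnvelope(txnJson: string, promptVersion: string) {
  return {
    inputHash: createHash("sha256").update(txnJson).digest("hex"),
    promptVersion,
    recordedAt: new Date().toISOString(),
  };
}

const envelope = auditEnvelope(
  JSON.stringify({ transactionId: "T-42", amount: 250000 }),
  "risk-prompt-v3", // hypothetical prompt version tag
);
```

Hashing the serialized input (rather than storing it inline everywhere) keeps audit records small while still making any later tampering with the evidence detectable.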
4) Route high-risk cases to humans
Wealth management needs human-in-the-loop review for anything ambiguous or high impact. The agent should create cases; it should not close them automatically when confidence is low or sanctions exposure exists.
```typescript
type CaseRecord = {
  transactionId: string;
  riskLevel: string;
  rationale: string;
  recommendedAction: string;
  policyHits: PolicyHit[];
};

function shouldEscalate(record: CaseRecord): boolean {
  return (
    record.riskLevel === "high" ||
    record.policyHits.some((hit) => hit.severity === "high") ||
    record.recommendedAction === "escalate"
  );
}
Wire this into your case management service so every alert carries the raw evidence plus model reasoning. That gives compliance teams a clean review trail.
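The escalation logic is worth exercising on its own; here it is restated so the snippet runs standalone, with an invented case record showing that one high-severity rule hit escalates even a medium-risk assessment:

```typescript
// Restated from the section above so this snippet is self-contained.
type PolicyHit = { code: string; severity: "low" | "medium" | "high" };

type CaseRecord = {
  transactionId: string;
  riskLevel: string;
  rationale: string;
  recommendedAction: string;
  policyHits: PolicyHit[];
};

function shouldEscalate(record: CaseRecord): boolean {
  return (
    record.riskLevel === "high" ||
    record.policyHits.some((hit) => hit.severity === "high") ||
    record.recommendedAction === "escalate"
  );
}

// Medium model risk, but a high-severity rule hit: still escalates.
const escalated = shouldEscalate({
  transactionId: "T-7",
  riskLevel: "medium",
  rationale: "Counterparty in restricted jurisdiction",
  recommendedAction: "review",
  policyHits: [{ code: "RESTRICTED_JURISDICTION", severity: "high" }],
});

// Low risk, no rule hits, monitor-only: stays with the agent.
const routine = shouldEscalate({
  transactionId: "T-8",
  riskLevel: "low",
  rationale: "Recurring payroll-sized transfer",
  recommendedAction: "monitor",
  policyHits: [],
});
```

Note that the OR over three signals means the deterministic rules can force escalation even when the model under-calls the risk, which is the point of keeping them separate.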
Production Considerations
- **Keep data residency explicit**
  - Wealth clients often require region-specific storage and processing.
  - Pin your vector store, logs, and model endpoints to approved jurisdictions.
- **Log every decision path**
  - Store raw input hashes, rule hits, prompt version, model version, output schema version, and reviewer action.
  - This is non-negotiable for auditability.
- **Separate detection from disposition**
  - Let rules and the LLM detect anomalies.
  - Let humans dispose of alerts when regulatory or reputational impact is material.
- **Add guardrails around PII**
  - Mask account numbers, beneficiary names where possible, and free-text notes before sending prompts.
  - Keep a reversible mapping only in your secure internal systems.
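A minimal sketch of the PII masking idea: replace sensitive values with opaque tokens before prompt construction, and keep the reversible mapping in an internal store that never reaches the model (the token format, sample IBAN, and `vault` structure are all illustrative):

```typescript
// Token-to-real-value mapping; in production this would live in a
// secure internal store, never in the prompt or the model logs.
const vault = new Map<string, string>();
let counter = 0;

function maskValue(real: string, kind: string): string {
  const token = `<${kind}_${++counter}>`;
  vault.set(token, real); // reversible only on our side
  return token;
}

// Build the prompt text from masked values only.
const acct = maskValue("CH93-0076-2011-6238-5295-7", "ACCT");
const beneficiary = maskValue("Jane Doe", "BENEFICIARY");
const promptText = `Wire from account ${acct} to ${beneficiary}`;
```

The model still sees enough structure to reason about the pattern ("a wire to a new beneficiary") without ever seeing the real identifiers, and reviewers can unmask tokens inside the secure case system.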
Common Pitfalls
- **Using the LLM as the primary detector**
  - Don't ask the model to decide whether something is suspicious without rules.
  - Fix it by running deterministic checks first and using LangChain for explanation and triage support.
- **Returning free-form text instead of structured output**
  - Free text breaks downstream alert routing.
  - Fix it with `StructuredOutputParser` or another schema-backed parser so every response is machine-readable.
- **Ignoring compliance metadata**
  - If you don't persist prompt versioning, policy versioning, and reviewer actions, you won't survive an audit.
  - Fix it by writing all inputs and outputs to an immutable case record with timestamps and trace IDs.
A good wealth management monitoring agent is not just an LLM wrapper. It is a controlled decision pipeline with validation upfront, structured reasoning in the middle, and human accountability at the end.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.