How to Build a Transaction Monitoring Agent Using LangChain in TypeScript for Pension Funds

By Cyprian Aarons. Updated 2026-04-21
transaction-monitoring · langchain · typescript · pension-funds

A transaction monitoring agent for pension funds watches contribution, transfer, withdrawal, and benefit-payment activity, then flags patterns that look inconsistent with scheme rules, AML policy, or regulatory expectations. It matters because pension administrators handle long-lived accounts with high trust, strict audit requirements, and low tolerance for false negatives or unexplained decisions.

Architecture

  • Transaction ingestion layer

    • Pulls events from the pension admin platform, payment rails, or a Kafka topic.
    • Normalizes records into a single schema: member ID, amount, currency, counterparty, timestamp, channel, and transaction type.
  • Rules and risk feature extractor

    • Computes deterministic signals before any LLM call.
    • Examples: velocity checks, unusual withdrawal size, beneficiary changes close to payout date, repeated reversals, dormant-account activity.
  • LangChain decision agent

    • Uses ChatOpenAI plus structured tools to classify risk and explain why.
    • Produces a JSON decision with riskLevel, reasonCodes, and recommendedAction.
  • Case management sink

    • Writes alerts to a queue or case system for analyst review.
    • Persists the full prompt context, model output, and rule inputs for audit.
  • Policy and compliance layer

    • Enforces jurisdiction-specific controls.
    • Handles data residency rules by routing EU member data to EU-hosted infrastructure only.
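
The residency control in the policy layer can be sketched as a pure routing lookup. The country sets and endpoint URLs below are illustrative assumptions, not real infrastructure; swap in your own region map and deployments.

```typescript
// Sketch of jurisdiction-based routing for the policy layer.
// Region membership and endpoint URLs are illustrative assumptions.
type Region = "eu" | "uk" | "us";

const MODEL_ENDPOINTS: Record<Region, string> = {
  eu: "https://eu.llm.internal/v1",
  uk: "https://uk.llm.internal/v1",
  us: "https://us.llm.internal/v1",
};

// Partial list for illustration only; a real system needs the full EU set.
const EU_COUNTRIES = new Set(["DE", "FR", "IE", "NL", "ES", "IT"]);

export function resolveRegion(memberCountry: string): Region {
  if (EU_COUNTRIES.has(memberCountry)) return "eu";
  if (memberCountry === "GB") return "uk";
  return "us";
}

export function endpointFor(memberCountry: string): string {
  return MODEL_ENDPOINTS[resolveRegion(memberCountry)];
}
```

Resolving the region before any model call makes it impossible to send EU member data to a non-EU endpoint by accident.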

Implementation

  1. Define the transaction schema and deterministic pre-checks

Start with hard signals. For pension funds, you do not want the LLM inventing risk from raw text when a simple rule already catches it.

import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  memberId: z.string(),
  schemeId: z.string(),
  type: z.enum(["contribution", "transfer_in", "transfer_out", "withdrawal", "benefit_payment"]),
  amount: z.number().positive(),
  currency: z.string().length(3),
  timestamp: z.string(),
  channel: z.enum(["bank_transfer", "card", "internal_move", "manual"]),
  counterpartyCountry: z.string().optional(),
});

export type Transaction = z.infer<typeof TransactionSchema>;

export function deterministicRisk(tx: Transaction) {
  const reasons: string[] = [];
  let score = 0;

  if (tx.type === "withdrawal" && tx.amount > 50000) {
    score += 40;
    reasons.push("Large withdrawal");
  }

  if (tx.channel === "manual") {
    score += 20;
    reasons.push("Manual processing channel");
  }

  if (tx.counterpartyCountry && ["IR", "KP", "SY"].includes(tx.counterpartyCountry)) {
    score += 50;
    reasons.push("High-risk jurisdiction");
  }

  return { score, reasons };
}
  2. Build a structured LangChain agent that returns an auditable decision

Use ChatOpenAI with StructuredOutputParser so the model cannot free-form its way out of your control. For production monitoring in pensions, structured output is non-negotiable.

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { z } from "zod";
import { TransactionSchema, deterministicRisk } from "./transaction";

const DecisionSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  reasonCodes: z.array(z.string()),
  recommendedAction: z.enum(["allow", "review", "freeze_and_escalate"]),
});

const parser = StructuredOutputParser.fromZodSchema(DecisionSchema);

const prompt = PromptTemplate.fromTemplate(`
You are a transaction monitoring analyst for a pension fund.
Use only the provided transaction data and deterministic signals.
Return strictly valid JSON.

Transaction:
{transaction}

Deterministic signals:
{signals}

{format_instructions}
`);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export async function assessTransaction(rawTx: unknown) {
  const tx = TransactionSchema.parse(rawTx);
  const signals = deterministicRisk(tx);

  const chain = prompt.pipe(model).pipe(parser);

  return chain.invoke({
    transaction: JSON.stringify(tx),
    signals: JSON.stringify(signals),
    format_instructions: parser.getFormatInstructions(),
  });
}
  3. Add retrieval for scheme policy and jurisdiction rules

For pension funds, decisions depend on scheme-specific policy documents and local regulations. Use retrieval to ground the model in approved policy text instead of asking it to “know” compliance rules.

import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";

const docs = [
  new Document({ pageContent: "Withdrawals over £25k require enhanced review.", metadata: { source: "scheme-policy" } }),
  new Document({ pageContent: "EU member data must remain in EU-hosted systems.", metadata: { source: "data-residency" } }),
];

const vectorStore = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings());

export async function retrievePolicyContext(query: string) {
  const results = await vectorStore.similaritySearch(query, 2);
  return results.map((doc) => doc.pageContent);
}

Use those retrieved snippets as context in the prompt before classification. That keeps the agent aligned with internal policy and reduces hallucinated compliance logic.
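
One sketch of that step: flatten the retrieved snippets into a labelled context string that can be passed into the prompt template. The `PolicySnippet` shape mirrors the `Document` fields used above; the labelling format is an assumption.

```typescript
// Formats retrieved policy snippets into a prompt context section.
// PolicySnippet mirrors the Document shape used above; labels are illustrative.
interface PolicySnippet {
  pageContent: string;
  metadata: { source: string };
}

export function formatPolicyContext(snippets: PolicySnippet[]): string {
  if (snippets.length === 0) return "No matching policy text found.";
  return snippets
    .map((s, i) => `[${i + 1}] (${s.metadata.source}) ${s.pageContent}`)
    .join("\n");
}
```

Numbering each snippet with its source lets the model cite policy in its reason codes, and lets auditors trace a decision back to the exact text retrieved.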

  4. Wire the result into an alerting workflow

The agent should not directly block payments without policy approval. It should emit an explainable decision into your case system with enough evidence for an investigator to review.

// Assumes TransactionSchema and assessTransaction are imported from the modules above.
type MonitoringResult = {
  transactionId: string;
  riskLevel: "low" | "medium" | "high";
  recommendedAction: "allow" | "review" | "freeze_and_escalate";
};

export async function monitorTransaction(rawTx: unknown): Promise<MonitoringResult> {
  const tx = TransactionSchema.parse(rawTx);
  const decision = await assessTransaction(tx);
  // Emit to the case system; the agent recommends, humans act.
  return { transactionId: tx.transactionId, riskLevel: decision.riskLevel, recommendedAction: decision.recommendedAction };
}

A practical pattern is:

  • run deterministic scoring first
  • retrieve relevant policy text
  • ask the LLM for a structured decision
  • persist input/output plus model version
  • route high risk items to analysts

That gives you traceability without turning the LLM into the system of record.
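
The pattern above can be sketched as a small orchestrator with injected dependencies, so the scoring, retrieval, LLM call, and persistence can each be stubbed in tests. All interface and parameter names here are illustrative assumptions, not a prescribed API.

```typescript
// Orchestration sketch of the five-step pattern; dependencies are injected so
// each stage (rules, retrieval, LLM, persistence, escalation) is swappable.
interface PipelineDeps {
  score: (tx: unknown) => { score: number; reasons: string[] };
  retrieve: (query: string) => Promise<string[]>;
  decide: (tx: unknown, signals: unknown, policy: string[]) => Promise<{ riskLevel: string }>;
  persist: (record: unknown) => Promise<void>;
  escalate: (record: unknown) => Promise<void>;
}

export async function runPipeline(
  tx: { transactionId: string; type: string },
  deps: PipelineDeps
) {
  const signals = deps.score(tx);                    // deterministic scoring first
  const policy = await deps.retrieve(tx.type);       // ground in approved policy text
  const decision = await deps.decide(tx, signals, policy); // structured LLM decision
  const record = { tx, signals, policy, decision, model: "gpt-4o-mini" };
  await deps.persist(record);                        // input/output plus model version
  if (decision.riskLevel === "high") await deps.escalate(record); // route to analysts
  return decision;
}
```

Injecting dependencies also makes it cheap to replay historical transactions against a new model version before promoting it.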

Production Considerations

  • Data residency

Keep member data in-region. If your fund operates across the UK, EU, and US, split prompts by jurisdiction and use separate model endpoints or private deployments where required.

  • Auditability

Store:

  • raw transaction payload
  • deterministic features
  • retrieved policy snippets
  • final structured decision
  • model name and version

This is what internal audit will ask for after the first suspicious transfer case.
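
A minimal sketch of that audit record, assuming field names you would adapt to your case system's schema:

```typescript
// One row per assessment; covers everything internal audit will ask for.
// Field names are illustrative assumptions.
export interface AuditRecord {
  transactionId: string;
  rawTransaction: unknown;
  deterministicSignals: { score: number; reasons: string[] };
  policySnippets: string[];
  decision: { riskLevel: string; reasonCodes: string[]; recommendedAction: string };
  modelName: string;
  modelVersion: string;
  recordedAt: string;
}

export function buildAuditRecord(input: Omit<AuditRecord, "recordedAt">): AuditRecord {
  return { ...input, recordedAt: new Date().toISOString() };
}
```

Stamping the record at write time, rather than trusting upstream timestamps, keeps the audit trail ordered even when batch jobs replay old transactions.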

  • Guardrails

Never let the agent take irreversible actions alone on pension withdrawals or transfers. Use it as a triage layer that recommends review or freeze_and_escalate, then hand off to human operations under maker-checker controls.
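
A sketch of that triage boundary, with illustrative queue names: the agent's recommendation maps to a queue, and only `allow` ever auto-executes.

```typescript
// Maker-checker gate: the agent recommends, but a freeze always lands in a
// human queue. Queue names are illustrative assumptions.
type Action = "allow" | "review" | "freeze_and_escalate";

export function routeDecision(action: Action): { queue: string; autoExecute: boolean } {
  switch (action) {
    case "allow":
      return { queue: "auto-clear", autoExecute: true };
    case "review":
      return { queue: "analyst-review", autoExecute: false };
    case "freeze_and_escalate":
      // Never executed by the agent; a human checker must approve the freeze.
      return { queue: "ops-escalation", autoExecute: false };
  }
}
```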

  • Monitoring

Track false positives by transaction type:

  • contributions usually have lower fraud value but can reveal payroll anomalies
  • transfers out often need stricter scrutiny
  • benefit payments need tight exception handling around retirement eligibility rules

Also watch latency. Pension ops teams hate slow queues when batch runs hit month-end processing windows.
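
A minimal sketch of per-type false-positive tracking, fed by analyst dispositions after case review (the counter shape is an assumption):

```typescript
// Per-transaction-type false-positive counters, updated from analyst outcomes.
type TxType = "contribution" | "transfer_in" | "transfer_out" | "withdrawal" | "benefit_payment";

const counts = new Map<TxType, { alerts: number; falsePositives: number }>();

export function recordDisposition(type: TxType, wasFalsePositive: boolean): void {
  const c = counts.get(type) ?? { alerts: 0, falsePositives: 0 };
  c.alerts += 1;
  if (wasFalsePositive) c.falsePositives += 1;
  counts.set(type, c);
}

export function falsePositiveRate(type: TxType): number {
  const c = counts.get(type);
  return c && c.alerts > 0 ? c.falsePositives / c.alerts : 0;
}
```

Splitting the rate by type makes it obvious when, say, transfer-out thresholds are too tight while contribution checks are too loose.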

Common Pitfalls

  1. Using the LLM as the first line of defense

    Don’t start with prompts. Start with rules and features that are explainable and cheap. The model should interpret context, not replace basic controls like threshold checks and sanctioned-country screening.

  2. Letting outputs stay free-form

    Free-text “analysis” is hard to audit and impossible to automate safely. Force JSON using StructuredOutputParser or a Zod-backed schema so downstream systems can route cases deterministically.

  3. Ignoring pension-specific policy constraints

    Generic AML logic is not enough. Pension funds deal with scheme rules, retirement eligibility checks, trustee oversight, and regional data handling requirements; bake those into retrieval and routing from day one.


By Cyprian Aarons, AI Consultant at Topiax.
