How to Build a Transaction Monitoring Agent Using LangChain in TypeScript for Lending

By Cyprian Aarons · Updated 2026-04-21

Tags: transaction-monitoring · langchain · typescript · lending

A transaction monitoring agent for lending watches borrower activity, flags suspicious or policy-breaking patterns, and turns raw events into actionable cases for analysts. It matters because lending teams need to catch early delinquency signals, fraud patterns, and compliance issues without burying ops in false positives.

Architecture

  • Transaction ingestion layer

    • Pulls payment events, disbursements, reversals, and account updates from your core lending system.
    • Normalizes them into a consistent event shape before they hit the agent.
  • Policy and rules engine

    • Handles deterministic checks like velocity spikes, repeated failed payments, unusual cash-in patterns, or round-dollar behavior.
    • Keeps hard compliance logic out of the LLM.
  • LangChain classification agent

    • Uses ChatOpenAI plus a structured output parser to classify an event as normal, suspicious, or needs-review.
    • Produces machine-readable decisions for downstream systems.
  • Case enrichment tools

    • Fetches borrower profile data, loan status, repayment history, KYC flags, and prior case notes.
    • Gives the model context without stuffing everything into the prompt.
  • Audit and evidence store

    • Persists every input, model output, rule hit, and tool call.
    • Required for lending audits, model governance, and dispute handling.
  • Case management sink

    • Sends high-risk alerts to a queue or case system for analyst review.
    • Keeps the agent advisory, not autonomous.
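The components above can be wired together as a simple pipeline. This is a hedged sketch, not a definitive implementation: the names (`processEvent`, `RiskDecision`) are illustrative, and each stage is a plain async function so deterministic rules run before the model and every hop can be audited.

```typescript
// Sketch of how the components connect: ingestion -> rules -> LLM -> audit -> sink.
type RiskDecision = {
  riskLevel: "low" | "medium" | "high";
  action: "allow" | "review" | "block";
  reason: string;
};

async function processEvent(
  raw: unknown,
  normalize: (raw: unknown) => Record<string, unknown>, // ingestion layer
  rules: (event: Record<string, unknown>) => RiskDecision | null, // policy engine
  classify: (event: Record<string, unknown>) => Promise<RiskDecision>, // LangChain agent
  audit: (record: unknown) => Promise<void> // evidence store
): Promise<RiskDecision> {
  const event = normalize(raw);
  // Rules short-circuit; the LLM only sees ambiguous events.
  const decision = rules(event) ?? (await classify(event));
  await audit({ event, decision });
  return decision; // handed to the case management sink
}
```

The key design choice is that the policy engine can return a decision without the model ever being called, which keeps hard compliance logic out of the LLM.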

Implementation

1) Define the event schema and decision contract

Keep the contract strict. Lending workflows need explainability and reproducibility, so don’t let the model emit free-form text when you need a decision object.

import { z } from "zod";

export const TransactionEventSchema = z.object({
  transactionId: z.string(),
  borrowerId: z.string(),
  loanId: z.string(),
  amount: z.number(),
  currency: z.string().default("USD"),
  type: z.enum(["payment", "disbursement", "refund", "chargeback"]),
  timestamp: z.string(), // ISO string
  channel: z.enum(["bank_transfer", "card", "cash", "ach", "mobile_money"]),
});

export const MonitoringDecisionSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  action: z.enum(["allow", "review", "block"]),
  reason: z.string(),
  evidence: z.array(z.string()),
});

2) Build deterministic pre-checks before calling the model

This is where you enforce policy. For lending use cases, rules should catch obvious violations first so the LLM only handles ambiguous cases.

type TransactionEvent = z.infer<typeof TransactionEventSchema>;

function applyRules(event: TransactionEvent) {
  const evidence: string[] = [];

  if (event.amount >= 10000 && event.type === "payment") {
    evidence.push("Large payment above internal review threshold");
    return { riskLevel: "high" as const, action: "review" as const, evidence };
  }

  if (event.channel === "cash" && event.type === "disbursement") {
    evidence.push("Cash disbursement requires manual approval");
    return { riskLevel: "high" as const, action: "block" as const, evidence };
  }

  if (event.type === "chargeback") {
    evidence.push("Chargeback detected");
    return { riskLevel: "medium" as const, action: "review" as const, evidence };
  }

  return null;
}
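The rules above are stateless, but the architecture section also mentions velocity spikes, which need a small amount of state. This is a hedged sketch of a sliding-window velocity check; the one-hour window, the five-event threshold, and the in-memory `Map` are illustrative assumptions (production systems would typically back this with Redis or a stream processor).

```typescript
const WINDOW_MS = 60 * 60 * 1000; // one hour, illustrative
const MAX_EVENTS_PER_WINDOW = 5; // illustrative threshold, not policy

const recentEvents = new Map<string, number[]>(); // borrowerId -> timestamps (ms)

function velocityExceeded(borrowerId: string, nowMs: number): boolean {
  // Drop timestamps that have aged out of the window, then record this event.
  const seen = (recentEvents.get(borrowerId) ?? []).filter(
    (t) => nowMs - t < WINDOW_MS
  );
  seen.push(nowMs);
  recentEvents.set(borrowerId, seen);
  return seen.length > MAX_EVENTS_PER_WINDOW;
}
```

A hit from this check would feed the same `evidence` array as the static rules, e.g. `"More than 5 payment events in the last hour"`.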

3) Add LangChain classification with structured output

Use ChatOpenAI with withStructuredOutput() so the model's reply is parsed and validated against the Zod schema and comes back as a typed decision object, ready to write to your audit log.

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { RunnableLambda } from "@langchain/core/runnables";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini", // `model` supersedes the deprecated `modelName` option
  temperature: 0,
});

const monitoringPrompt = `
You are a transaction monitoring analyst for a lending platform.

Classify the transaction using these rules:
- Prefer conservative decisions when signals are ambiguous.
- Focus on fraud risk, repayment abuse, AML red flags, and policy violations.
- Return only structured output.
`;

const classifier = llm.withStructuredOutput(MonitoringDecisionSchema);

const monitorTransaction = new RunnableLambda({
  func: async (eventInput: unknown) => {
    const event = TransactionEventSchema.parse(eventInput);

    const ruleResult = applyRules(event);
    if (ruleResult) {
      return {
        ...ruleResult,
        reason: ruleResult.evidence[0],
      };
    }

    const response = await classifier.invoke([
      new HumanMessage(
        `${monitoringPrompt}\n\nTransaction:\n${JSON.stringify(event)}`
      ),
    ]);

    return response;
  },
});

4) Run the agent and persist an audit trail

In lending, every decision needs traceability. Store the raw event, rule hits, model version, final decision, and timestamp in your database or event log.

// Stub for illustration: replace with a write to your database or event log.
async function saveAuditRecord(record: unknown) {
  console.log("AUDIT_RECORD", JSON.stringify(record));
}
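One low-ceremony way to persist the trail is an append-only JSONL file, one record per line. This is a hedged sketch under stated assumptions: the file path, record shape, and `auditedAt` field are illustrative, and a real deployment would write to a database or event log instead.

```typescript
import { appendFileSync } from "node:fs";

// Append one JSON object per line; append-only keeps the trail tamper-evident.
function appendAuditRecord(path: string, record: Record<string, unknown>): void {
  const line = JSON.stringify({ ...record, auditedAt: new Date().toISOString() });
  appendFileSync(path, line + "\n", "utf8");
}
```

JSONL keeps each decision independently parseable, which matters when an auditor asks for a single transaction's history.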

async function main() {
  if (process.env.OPENAI_API_KEY == null) {
    throw new Error("OPENAI_API_KEY missing");
  }

  const input = {
    transactionId: "txn_123",
    borrowerId: "bor_456",
    loanId: "loan_789",
    amount: 12500,
    currency: "USD",
    type: "payment",
    timestamp: new Date().toISOString(),
    channel: "ach",
  };

  const decision = await monitorTransaction.invoke(input);

  await saveAuditRecord({
    ...input,
    decision,
    model: "gpt-4o-mini",
    pipeline: "lending-txn-monitor-v1",
  });

  console.log(decision);
}

main().catch(console.error);

Production Considerations

  • Deploy in-region for data residency

    • Lending data often falls under jurisdictional storage requirements.
    • Keep borrower PII and loan events in-region if your regulator or contract requires it.
  • Separate policy decisions from model judgments

    • Hard compliance rules (thresholds, blocked channels, velocity limits) stay deterministic, versioned, and auditable.
    • The model only classifies ambiguous events, and its output remains advisory until an analyst confirms it.
Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
