How to Build a Transaction Monitoring Agent Using LangChain in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21
transaction-monitoring · langchain · typescript · insurance

A transaction monitoring agent for insurance watches policy payments, claims payouts, refunds, and endorsements for suspicious or non-compliant patterns. It matters because insurers need to catch fraud, sanction exposure, duplicate payments, and unusual activity early, while keeping a clean audit trail for regulators and internal risk teams.

Architecture

  • Event ingestion layer

    • Pulls transactions from policy admin systems, claims platforms, payment processors, or Kafka topics.
    • Normalizes records into a consistent schema before the agent sees them.
  • Risk rules engine

    • Applies deterministic checks first: amount thresholds, velocity spikes, duplicate bank accounts, policy age anomalies.
    • Keeps obvious cases out of the LLM path.
  • LangChain decision agent

    • Uses ChatOpenAI plus tools to classify the event, explain why it looks suspicious, and decide whether to escalate.
    • Produces structured output for downstream case management.
  • Evidence retrieval layer

    • Retrieves policy history, prior claims, customer KYC notes, sanctions hits, and prior alerts.
    • Gives the agent context so it does not guess.
  • Case management sink

    • Writes alerts into a queue, ticketing system, or SIEM.
    • Stores the model output, retrieved evidence IDs, and rule hits for audit.
  • Audit and observability

    • Logs prompts, tool calls, model versions, latency, and final decisions.
    • Required for compliance reviews and post-incident analysis.
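The record the case management sink stores can be sketched as a TypeScript type. The field names here are illustrative assumptions drawn from the bullets above, not a required schema:

```typescript
// Hypothetical shape of the record written to the case management sink.
// Field names are assumptions; adapt them to your ticketing or SIEM schema.
export interface MonitoringAlert {
  transactionId: string;
  riskLevel: "low" | "medium" | "high";
  escalate: boolean;
  reasons: string[];      // model explanation, one reason per entry
  ruleHits: string[];     // deterministic rule IDs that fired
  evidenceIds: string[];  // references to retrieved documents, not raw PII
  modelVersion: string;   // pinned for audit and reproducibility
  decidedAt: string;      // ISO 8601 timestamp of the decision
}
```

Storing evidence IDs rather than evidence text keeps the alert record small and avoids duplicating sensitive data outside your systems of record.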

Implementation

1) Define the transaction shape and risk rules

Keep the schema explicit. Insurance data is messy; if you let free-form objects into the agent layer you will get brittle prompts and bad audits.

import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  policyId: z.string(),
  customerId: z.string(),
  type: z.enum(["premium_payment", "claim_payout", "refund", "endorsement"]),
  amount: z.number().positive(),
  currency: z.string().length(3),
  country: z.string().length(2),
  timestamp: z.string().datetime(),
  paymentMethod: z.enum(["bank_transfer", "card", "cash", "check"]),
});

export type Transaction = z.infer<typeof TransactionSchema>;

export function applyRules(txn: Transaction) {
  const hits: string[] = [];

  if (txn.amount >= 10000) hits.push("high_value_transaction");
  if (txn.type === "refund" && txn.amount >= 5000) hits.push("large_refund");
  if (txn.paymentMethod === "cash") hits.push("cash_payment");
  if (["IR", "KP", "RU"].includes(txn.country)) hits.push("high_risk_jurisdiction");

  return hits;
}
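A quick standalone check of the rule logic. The rule body is repeated inline with a plain type here so the snippet runs without the zod schema:

```typescript
// Inline copy of the rule logic from applyRules above, with a plain
// type instead of the zod-inferred one so this snippet is self-contained.
type Txn = {
  type: "premium_payment" | "claim_payout" | "refund" | "endorsement";
  amount: number;
  country: string;
  paymentMethod: "bank_transfer" | "card" | "cash" | "check";
};

function applyRules(txn: Txn): string[] {
  const hits: string[] = [];
  if (txn.amount >= 10000) hits.push("high_value_transaction");
  if (txn.type === "refund" && txn.amount >= 5000) hits.push("large_refund");
  if (txn.paymentMethod === "cash") hits.push("cash_payment");
  if (["IR", "KP", "RU"].includes(txn.country)) hits.push("high_risk_jurisdiction");
  return hits;
}

// A cash refund of 12,000 from a low-risk jurisdiction trips three rules.
const hits = applyRules({
  type: "refund",
  amount: 12000,
  country: "DE",
  paymentMethod: "cash",
});
// hits: ["high_value_transaction", "large_refund", "cash_payment"]
```

Because the rules are pure functions over a typed record, they are trivial to unit test, which is exactly what compliance reviewers want to see.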

2) Build a structured LangChain agent prompt

For insurance monitoring you want structured output, not chatty prose. Use ChatOpenAI with withStructuredOutput() so the agent returns fields your case system can consume directly.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { TransactionSchema, applyRules } from "./schema";

const DecisionSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  escalate: z.boolean(),
  reasons: z.array(z.string()),
});

type Decision = z.infer<typeof DecisionSchema>;

const prompt = ChatPromptTemplate.fromMessages([
  ["system",
    `You are a transaction monitoring analyst for an insurance company.
Classify transactions using policy data and evidence.
Be strict on compliance. If data is missing, say so explicitly.
Return only structured output.`],
  ["human",
    `Transaction:
{transaction}

Rule hits:
{ruleHits}

Evidence:
{evidence}`],
]);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const structuredLLM = llm.withStructuredOutput(DecisionSchema);

export async function analyzeTransaction(transaction: unknown) {
  // Validate the incoming record before it reaches the prompt.
  const txn = TransactionSchema.parse(transaction);
  const ruleHits = applyRules(txn);

  const evidence = [
    `Policy ${txn.policyId} payment history available`,
    `Customer ${txn.customerId} prior claim count checked`,
    `Jurisdiction ${txn.country} screened`,
  ].join("\n");

  // Pipe the prompt straight into the structured-output model.
  const chain = prompt.pipe(structuredLLM);

  return chain.invoke({
    transaction: JSON.stringify(txn, null, 2),
    ruleHits: ruleHits.length ? ruleHits.join(", ") : "none",
    evidence,
  });
}

A cleaner production pattern

The real pattern is to run rules first, then call the LLM only when needed. That keeps cost down and gives compliance teams deterministic controls.

export async function monitor(transaction: unknown) {
  const txn = TransactionSchema.parse(transaction);
  const ruleHits = applyRules(txn);

  // Clean transactions never reach the model: cheaper, and the
  // decision is fully deterministic for compliance review.
  if (ruleHits.length === 0) {
    return { riskLevel: "low" as const, escalate: false, reasons: [] };
  }

  // Anything a rule flagged goes to the agent for a reasoned decision.
  return analyzeTransaction(txn);
}

Production Considerations

  • Keep PII out of prompts unless necessary

    • Mask account numbers, national IDs, and full addresses before sending context to the model.
    • Store raw values in your secure systems of record with access controls.
  • Respect data residency

    • Insurance data often has jurisdictional constraints by country or line of business.
    • Pin model endpoints and vector stores to approved regions; do not send EU customer data to an unapproved US-hosted service.
  • Log everything needed for audit

    • Capture prompts, tool calls, model versions, latency, and the final decision for every alert, as outlined in the architecture above.
    • Regulators will ask why a transaction was or was not escalated; these logs are your answer.
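For the PII point above, a minimal masking sketch, assuming a simple last-four-visible policy (a hypothetical `maskIdentifier` helper, not a library API):

```typescript
// Hypothetical helper: replace all but the last few characters of an
// identifier before it is interpolated into a prompt. The raw value
// stays in the system of record; only the masked form reaches the model.
export function maskIdentifier(value: string, visible = 4): string {
  if (value.length <= visible) return "*".repeat(value.length);
  return "*".repeat(value.length - visible) + value.slice(-visible);
}
```

Keeping the last four characters usually gives an analyst enough to correlate an alert with the underlying account without ever putting the full number in a prompt log.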

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
