How to Build a Transaction Monitoring Agent Using LangGraph in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21
transaction-monitoring · langgraph · typescript · insurance

A transaction monitoring agent for insurance watches policy payments, claims payouts, refunds, premium adjustments, and agent commissions for suspicious patterns. It matters because insurers deal with fraud, sanctions exposure, AML-adjacent controls, and regulatory auditability across multiple jurisdictions.

Architecture

  • Event intake layer

    • Consumes payment events, claim events, refund events, and policy changes from Kafka, SQS, or a webhook API.
    • Normalizes the payload into a single transaction schema.
  • Risk scoring node

    • Applies deterministic rules first: amount thresholds, velocity checks, duplicate payouts, unusual beneficiary changes.
    • Calls an LLM only when the rule engine needs context or narrative classification.
  • Evidence retrieval node

    • Pulls policy history, claim history, customer profile data, and prior alerts from your internal systems.
    • Keeps the agent grounded in insurer-owned facts.
  • Decision node

    • Produces one of three outcomes: clear, review, or escalate.
    • Writes a structured rationale that compliance teams can audit later.
  • Case management output

    • Creates an alert in your case system when risk is high.
    • Attaches evidence, reason codes, and model outputs.
  • Audit and observability layer

    • Stores every node input/output for traceability.
    • Captures model version, prompt version, and policy version for regulator review.
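The intake layer's normalization step can be sketched as a pure mapping function. The raw event shape (`RawPaymentEvent`), its field names, and the event kinds below are illustrative assumptions, not a real Kafka payload spec:

```typescript
// Sketch of intake normalization. Raw event shape and field names are
// illustrative assumptions; adapt them to your actual upstream payloads.
type RawPaymentEvent = {
  eventId: string;
  kind: "payment.received" | "claim.paid" | "refund.issued";
  amountMinor: number; // amount in minor units, e.g. cents
  currency: string;
  customerRef: string;
  countryCode: string;
};

type NormalizedTxn = {
  id: string;
  type: "premium" | "claim" | "refund" | "commission";
  amount: number;
  currency: string;
  customerId: string;
  country: string;
};

// Map upstream event kinds onto the unified transaction types.
const TYPE_MAP: Record<RawPaymentEvent["kind"], NormalizedTxn["type"]> = {
  "payment.received": "premium",
  "claim.paid": "claim",
  "refund.issued": "refund",
};

function normalizeEvent(raw: RawPaymentEvent): NormalizedTxn {
  return {
    id: raw.eventId,
    type: TYPE_MAP[raw.kind],
    amount: raw.amountMinor / 100, // convert minor units to major
    currency: raw.currency,
    customerId: raw.customerRef,
    country: raw.countryCode,
  };
}
```

Keeping this step a pure function makes it trivial to unit test and to replay historical events through a new schema version.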

Implementation

1) Define the transaction state and graph nodes

LangGraph in TypeScript is built around a typed state object and nodes that transform it. For insurance monitoring, keep the state explicit so every decision is explainable.

import { StateGraph, START, END } from "@langchain/langgraph";

type Txn = {
  id: string;
  type: "premium" | "claim" | "refund" | "commission";
  amount: number;
  currency: string;
  customerId: string;
  country: string;
  beneficiaryChanged?: boolean;
};

type MonitoringState = {
  txn: Txn;
  riskScore: number;
  reasons: string[];
  decision?: "clear" | "review" | "escalate";
};

const scoreTransaction = async (state: MonitoringState): Promise<Partial<MonitoringState>> => {
  const reasons: string[] = [];
  let riskScore = 0;

  if (state.txn.amount > 10000) {
    riskScore += 40;
    reasons.push("High value transaction");
  }

  if (state.txn.type === "claim" && state.txn.beneficiaryChanged) {
    riskScore += 35;
    reasons.push("Claim payout beneficiary changed");
  }

  if (["IR", "KP", "SY"].includes(state.txn.country)) {
    riskScore += 50;
    reasons.push("High-risk jurisdiction");
  }

  return { riskScore, reasons };
};

const decide = async (state: MonitoringState): Promise<Partial<MonitoringState>> => {
  if (state.riskScore >= 70) return { decision: "escalate" };
  if (state.riskScore >= 40) return { decision: "review" };
  return { decision: "clear" };
};
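Because the scoring and decision nodes are plain functions, the deterministic path can be checked in isolation before any graph wiring. A standalone sketch (restating the same rules) shows that a 15,000 claim with a beneficiary change scores 40 + 35 = 75 and escalates without an LLM call:

```typescript
// Standalone check of the deterministic path: high value (40) plus a
// beneficiary change on a claim (35) crosses the escalate threshold (70).
type Txn = {
  id: string;
  type: "premium" | "claim" | "refund" | "commission";
  amount: number;
  country: string;
  beneficiaryChanged?: boolean;
};

function score(txn: Txn): { riskScore: number; reasons: string[] } {
  const reasons: string[] = [];
  let riskScore = 0;
  if (txn.amount > 10000) {
    riskScore += 40;
    reasons.push("High value transaction");
  }
  if (txn.type === "claim" && txn.beneficiaryChanged) {
    riskScore += 35;
    reasons.push("Claim payout beneficiary changed");
  }
  if (["IR", "KP", "SY"].includes(txn.country)) {
    riskScore += 50;
    reasons.push("High-risk jurisdiction");
  }
  return { riskScore, reasons };
}

function decide(riskScore: number): "clear" | "review" | "escalate" {
  if (riskScore >= 70) return "escalate";
  if (riskScore >= 40) return "review";
  return "clear";
}

const { riskScore } = score({
  id: "t1",
  type: "claim",
  amount: 15000,
  country: "US",
  beneficiaryChanged: true,
});
console.log(riskScore, decide(riskScore)); // → 75 escalate
```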

2) Add an LLM-backed review step for ambiguous cases

Use the LLM only after deterministic checks. That keeps cost down and makes the system easier to defend during audits.

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

const llmReview = async (state: MonitoringState): Promise<Partial<MonitoringState>> => {
  const prompt = `
You are reviewing an insurance transaction for fraud/compliance risk.

Transaction:
${JSON.stringify(state.txn)}

Rules already triggered:
${state.reasons.join("; ")}

Return JSON with keys:
- decision: clear | review | escalate
- reason: short explanation
`;

  const response = await llm.invoke(prompt);
  const text =
    typeof response.content === "string" ? response.content : JSON.stringify(response.content);

  // In production, parse JSON strictly with zod or similar.
  const parsed = JSON.parse(text);

  return {
    decision: parsed.decision,
    reasons: [...state.reasons, parsed.reason],
  };
};

3) Wire the graph with conditional routing

This is the core pattern. The graph scores first, then routes low-risk transactions directly to END, while borderline cases go through LLM review.

const highRiskThreshold = Number(process.env.HIGH_RISK_THRESHOLD ?? "70");
const reviewThreshold = Number(process.env.REVIEW_THRESHOLD ?? "40");

// Declare each state key as a channel so LangGraph can merge partial updates.
const graph = new StateGraph<MonitoringState>({
  channels: {
    txn: null,
    riskScore: null,
    reasons: null,
    decision: null,
  },
})
  .addNode("scoreTransaction", scoreTransaction)
  .addNode("llmReview", llmReview)
  .addNode("decide", decide)
  .addEdge(START, "scoreTransaction")
  .addConditionalEdges("scoreTransaction", (state) => {
    // High-risk scores go straight to a decision; only borderline
    // scores pay for a model call.
    if (state.riskScore >= highRiskThreshold) return "decide";
    if (state.riskScore >= reviewThreshold) return "llmReview";
    return "decide";
  })
  .addEdge("llmReview", "decide")
  .addEdge("decide", END);

const app = graph.compile();

async function run() {
  const result = await app.invoke({
    txn: {
      id: "txn_123",
      type: "claim",
      amount: 15000,
      currency: "USD",
      customerId: "cust_42",
      country: "US",
      beneficiaryChanged: true,
    },
    riskScore: 0,
    reasons: [],
  });

  console.log(result);
}

run();

Notes on the implementation

  • StateGraph gives you a typed workflow with explicit transitions.
  • START and END are real LangGraph sentinels used to define entry and exit points.
  • addConditionalEdges is what keeps this from becoming a linear chain of brittle logic.
  • Keep deterministic scoring before any model call. Insurance teams will ask why a case was escalated; rules are easier to justify than prompts.

Production Considerations

  • Compliance logging

    • Persist every input payload, intermediate score, final decision, and prompt version.
    • Store reason codes in a format compliance can query by policy number or claim ID.
  • Data residency

    • Keep EU policyholder data in EU-hosted infrastructure. If your agent calls an external model API, route only redacted fields or use region-specific endpoints.
  • Guardrails

    • Validate every model response against a strict schema before acting on it, and route parse failures to human review.
    • Never let the LLM downgrade a decision the rule engine already escalated; the model adds context, it does not override deterministic rules.

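To make the compliance logging bullet concrete, one possible shape for a per-decision audit record, sketched in TypeScript (field names and version labels are illustrative assumptions):

```typescript
// Illustrative audit record persisted after every graph run. The point is
// that model, prompt, and policy versions are all captured so a regulator
// can reconstruct exactly how any decision was made.
type AuditRecord = {
  txnId: string;
  riskScore: number;
  reasonCodes: string[];
  decision: "clear" | "review" | "escalate";
  modelVersion: string;
  promptVersion: string;
  policyVersion: string;
  decidedAt: string; // ISO-8601 timestamp
};

function toAuditRecord(input: {
  txnId: string;
  riskScore: number;
  reasons: string[];
  decision: AuditRecord["decision"];
  modelVersion: string;
  promptVersion: string;
  policyVersion: string;
}): AuditRecord {
  return {
    txnId: input.txnId,
    riskScore: input.riskScore,
    reasonCodes: input.reasons,
    decision: input.decision,
    modelVersion: input.modelVersion,
    promptVersion: input.promptVersion,
    policyVersion: input.policyVersion,
    decidedAt: new Date().toISOString(),
  };
}
```

Indexing these records by policy number or claim ID keeps them queryable by compliance without touching the agent code.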
By Cyprian Aarons, AI Consultant at Topiax.
