How to Build a Transaction Monitoring Agent for Healthcare Using LangChain in TypeScript

By Cyprian Aarons · Updated 2026-04-21
Tags: transaction-monitoring, langchain, typescript, healthcare

A transaction monitoring agent for healthcare watches claims, payments, and eligibility events for suspicious patterns, policy violations, and operational anomalies. It matters because bad transactions in healthcare are not just financial noise; they can expose PHI, trigger compliance issues, and create downstream billing or care-delivery failures.

Architecture

  • Event ingestion layer

    • Pulls claim submissions, payment events, adjustments, and eligibility checks from your source system or message bus.
    • Normalizes each event into a consistent schema before it reaches the agent.
  • Rules + LLM decision layer

    • Uses deterministic rules for hard stops like duplicate claim IDs or invalid provider IDs.
    • Uses LangChain to classify ambiguous cases like unusual billing patterns or inconsistent diagnosis-code combinations.
  • Context retrieval layer

    • Fetches policy snippets, payer rules, internal SOPs, and prior case notes from a vector store.
    • Keeps the model grounded in approved healthcare policy instead of free-form reasoning.
  • Case management output layer

    • Writes alerts to a queue, SIEM, ticketing system, or fraud review dashboard.
    • Includes structured fields: risk score, reason codes, evidence references, and recommended next action.
  • Audit and compliance layer

    • Logs prompts, model outputs, retrieved documents, and final decisions.
    • Supports HIPAA auditability, retention policies, and data residency constraints.
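
The rules layer above can be sketched as a deterministic pre-check that runs before any model call. This is a minimal illustration, not the full layer: the provider registry, the in-memory dedup set, and the rule codes are all placeholder assumptions (in production the dedup state would live in a shared store such as Redis or a database).

```typescript
// Illustrative hard-stop rules that run before the LLM is ever invoked.
type TxPrecheck = { transactionId: string; providerId: string; amount: number };

// Placeholder state: swap for a shared store and a real provider registry.
const seenClaimIds = new Set<string>();
const validProviderIds = new Set(["PRV-001", "PRV-002"]);

function hardStopCheck(tx: TxPrecheck): string[] {
  const violations: string[] = [];
  if (seenClaimIds.has(tx.transactionId)) violations.push("DUPLICATE_CLAIM_ID");
  if (!validProviderIds.has(tx.providerId)) violations.push("INVALID_PROVIDER_ID");
  if (tx.amount < 0) violations.push("NEGATIVE_AMOUNT");
  seenClaimIds.add(tx.transactionId);
  return violations;
}
```

Any non-empty result short-circuits straight to the case management layer; only clean or ambiguous transactions continue to the LLM.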

Implementation

1) Define the transaction schema and the monitoring prompt

Keep the input small and structured. Do not feed raw PHI unless you absolutely need it; use tokenized patient identifiers where possible.

import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StructuredOutputParser } from "@langchain/core/output_parsers";

const TransactionSchema = z.object({
  transactionId: z.string(),
  eventType: z.enum(["claim", "payment", "adjustment", "eligibility"]),
  providerId: z.string(),
  memberIdHash: z.string(),
  amount: z.number().nonnegative(),
  cptCode: z.string().optional(),
  icd10Code: z.string().optional(),
  timestamp: z.string(),
});

const AlertSchema = z.object({
  riskScore: z.number().min(0).max(100),
  decision: z.enum(["approve", "review", "escalate"]),
  reasonCodes: z.array(z.string()),
  summary: z.string(),
});

const parser = StructuredOutputParser.fromZodSchema(AlertSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a healthcare transaction monitoring agent.
Classify the transaction using payer policy awareness and fraud/abuse heuristics.
Do not invent facts. If evidence is weak, choose review.
Return only valid JSON matching this schema:
{format_instructions}`,
  ],
  [
    "human",
    `Transaction:
{transaction}

Known policy context:
{policyContext}`,
  ],
]);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
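
The memberIdHash field above assumes tokenization happens upstream. One way to produce a stable, non-reversible token is a keyed hash; this sketch uses Node's built-in crypto, and the MEMBER_HASH_KEY secret name is illustrative.

```typescript
import { createHmac } from "node:crypto";

// Derive a stable, non-reversible member token so raw PHI never reaches the model.
// MEMBER_HASH_KEY is an illustrative name; load the key from your secrets manager.
function tokenizeMemberId(
  memberId: string,
  key = process.env.MEMBER_HASH_KEY ?? "dev-only-key"
): string {
  return createHmac("sha256", key).update(memberId).digest("hex");
}
```

Using an HMAC rather than a plain hash means an attacker with the token cannot brute-force member IDs without also holding the key.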

2) Add retrieval for healthcare policy context

In production you want the agent grounded in current payer rules and internal SOPs. RunnableLambda is useful for shaping context before it hits the model.

import { RunnableLambda } from "@langchain/core/runnables";

async function fetchPolicyContext(eventType: string): Promise<string> {
  const policies = {
    claim: "Flag duplicate claims submitted within 7 days. Escalate if billed amount exceeds expected threshold by >30%.",
    payment: "Review reversals above $10k or repeated reversals on same provider account.",
    adjustment: "Escalate adjustments that remove diagnosis codes after adjudication.",
    eligibility: "Review repeated eligibility checks against same member within short intervals.",
  };
  return policies[eventType as keyof typeof policies] ?? "No specific policy found.";
}
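
In production, fetchPolicyContext would query the vector store described in the architecture section rather than a hardcoded map. As a dependency-free stand-in, here is a keyword-overlap ranking sketch; the policy snippets are illustrative, and you would swap this for an embeddings-backed retriever.

```typescript
// Minimal stand-in for vector retrieval: rank policy snippets by keyword overlap.
// Replace with an embeddings-backed store in production.
const policyDocs = [
  "Flag duplicate claims submitted within 7 days.",
  "Review payment reversals above $10k.",
  "Escalate adjustments that remove diagnosis codes after adjudication.",
];

function topPolicySnippets(query: string, k = 2): string[] {
  const qTokens = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return policyDocs
    .map((doc) => ({
      doc,
      score: doc.toLowerCase().split(/\W+/).filter((t) => qTokens.has(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.doc);
}
```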

const buildContext = RunnableLambda.from(async (input: { transaction: unknown }) => {
  const tx = TransactionSchema.parse(input.transaction);
  const policyContext = await fetchPolicyContext(tx.eventType);
  return {
    transaction: JSON.stringify(tx),
    policyContext,
    format_instructions: parser.getFormatInstructions(),
  };
});

3) Compose the chain and run a transaction through it

This is the actual pattern you want in a service endpoint or worker. The chain returns structured output that your downstream case system can consume directly.

import { RunnableSequence } from "@langchain/core/runnables";

const monitoringChain = RunnableSequence.from([
  buildContext,
  prompt,
  model,
]);

export async function monitorTransaction(rawTransaction: unknown) {
  const result = await monitoringChain.invoke({ transaction: rawTransaction });

  // parser.parse validates against AlertSchema, so downstream consumers get
  // a typed alert: riskScore, decision, reasonCodes, summary.
  const parsed = await parser.parse(result.content.toString());
  return parsed;
}
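
Once an alert comes back, the case management output layer decides where it goes. A minimal routing sketch, assuming illustrative queue names and priority conventions:

```typescript
type Alert = {
  riskScore: number;
  decision: "approve" | "review" | "escalate";
  reasonCodes: string[];
  summary: string;
};

// Illustrative routing into the case-management layer; queue names are assumptions.
function routeAlert(alert: Alert): { queue: string; priority: number } {
  switch (alert.decision) {
    case "escalate":
      return { queue: "fraud-review", priority: 1 };
    case "review":
      return { queue: "ops-review", priority: alert.riskScore >= 70 ? 2 : 3 };
    default:
      return { queue: "auto-approve-log", priority: 5 };
  }
}
```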

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
