How to Build a Transaction Monitoring Agent Using AutoGen in TypeScript for Healthcare

By Cyprian Aarons · Updated 2026-04-21
Tags: transaction-monitoring, autogen, typescript, healthcare

A transaction monitoring agent for healthcare watches claims, payments, eligibility events, and refund activity for patterns that look wrong: duplicate billing, unusual write-offs, split claims, provider/member mismatches, and policy violations. It matters because these issues hit compliance, reimbursement accuracy, fraud detection, and audit readiness at the same time.

Architecture

  • Event ingestion layer

    • Pulls transactions from claims systems, payment rails, EHR-adjacent workflows, or Kafka topics (a consumer sketch follows this list).
    • Normalizes records into a common schema: member ID, provider ID, CPT/HCPCS codes, amount, timestamp, location, and payer.
  • AutoGen orchestration layer

    • Uses AssistantAgent to analyze each transaction batch.
    • Uses UserProxyAgent to execute deterministic checks and call internal services.
    • Keeps the LLM away from raw PHI where possible.
  • Policy and rules engine

    • Encodes hard rules for healthcare compliance:
      • duplicate claim detection
      • out-of-network mismatches
      • impossible service dates
      • abnormal refund patterns
    • Produces explainable flags before any AI reasoning.
  • Case management output

    • Writes alerts to a queue or case system with severity, reason codes, and evidence.
    • Includes an audit trail for every decision.
  • Security and governance layer

    • Redacts PHI before model calls.
    • Enforces data residency and retention policies.
    • Logs prompt/response metadata without leaking sensitive payloads.
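
If transactions arrive on a Kafka topic, the ingestion layer can be a thin consumer that validates each record and hands it to the monitoring workflow built later in this guide. This is a minimal sketch, assuming the kafkajs client plus broker and topic names that are not part of the stack above; swap in whatever ingestion source you actually use.

import { Kafka } from "kafkajs";
// HealthcareTransactionSchema and monitorTransaction come from the Implementation section below.

const kafka = new Kafka({ clientId: "txn-monitor", brokers: ["broker-1:9092"] });
const consumer = kafka.consumer({ groupId: "txn-monitoring" });

export async function startIngestion() {
  await consumer.connect();
  await consumer.subscribe({ topic: "healthcare-transactions", fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;

      // Validate at the boundary; never pass malformed records downstream.
      let payload: unknown;
      try {
        payload = JSON.parse(message.value.toString());
      } catch {
        return; // non-JSON payload: drop or dead-letter it
      }
      const parsed = HealthcareTransactionSchema.safeParse(payload);
      if (!parsed.success) return; // or publish to a dead-letter topic with the validation errors

      await monitorTransaction(parsed.data);
    },
  });
}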

Implementation

1) Install AutoGen and define a healthcare transaction shape

Use the TypeScript AutoGen package and keep your transaction object explicit. In healthcare, vague payloads become compliance problems fast.

npm install @autogenai/autogen openai zod

import { z } from "zod";

export const HealthcareTransactionSchema = z.object({
  transactionId: z.string(),
  memberId: z.string(),
  providerId: z.string(),
  payerId: z.string(),
  serviceDate: z.string(), // ISO date
  postedAt: z.string(),    // ISO date-time
  amount: z.number(),
  currency: z.string().default("USD"),
  codeType: z.enum(["CPT", "HCPCS", "ICD10", "NDC"]),
  code: z.string(),
  locationState: z.string().length(2),
  isRefund: z.boolean().default(false),
});

export type HealthcareTransaction = z.infer<typeof HealthcareTransactionSchema>;
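
At the ingestion boundary it is usually safer to use safeParse, so malformed records are rejected with field-level errors instead of throwing mid-pipeline. A small sketch (parseTransaction is a name of my choosing, not part of AutoGen or zod):

export function parseTransaction(raw: unknown): HealthcareTransaction | null {
  const result = HealthcareTransactionSchema.safeParse(raw);
  if (!result.success) {
    // Log field-level issues for triage, never the raw payload (it may contain PHI).
    console.warn("rejected transaction", result.error.flatten().fieldErrors);
    return null;
  }
  return result.data;
}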

2) Create an assistant agent that explains risk in plain language

The agent should summarize suspicious activity and produce structured output. Keep the prompt narrow so it does not drift into diagnosis or clinical advice.

import { AssistantAgent } from "@autogenai/autogen";
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

export const fraudAnalyst = new AssistantAgent({
  name: "fraud_analyst",
  modelClient: client,
  systemMessage: `
You review healthcare financial transactions for fraud, waste, abuse, and policy violations.
Do not infer clinical facts. Do not request more PHI than needed.
Return JSON with:
- riskScore (0-100)
- severity ("low" | "medium" | "high")
- reasons (string[])
- recommendedAction ("auto_close" | "manual_review" | "escalate")
`,
});
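
Since the system message asks for JSON with specific fields, it helps to pin that contract down with a schema you can validate model output against before anything is written to a case. A sketch reusing the zod dependency already installed (AnalystOutputSchema is my name for it, not an AutoGen type):

import { z } from "zod";

// Mirrors the fields requested in the system message above.
export const AnalystOutputSchema = z.object({
  riskScore: z.number().min(0).max(100),
  severity: z.enum(["low", "medium", "high"]),
  reasons: z.array(z.string()),
  recommendedAction: z.enum(["auto_close", "manual_review", "escalate"]),
});

export type AnalystOutput = z.infer<typeof AnalystOutputSchema>;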

3) Add a user proxy agent for deterministic checks and tool execution

UserProxyAgent is where you run local validation before the LLM sees anything sensitive. This is the pattern you want in production.

import { UserProxyAgent } from "@autogenai/autogen";

export const complianceBot = new UserProxyAgent({
  name: "compliance_bot",
  humanInputMode: "NEVER",
});

function deterministicChecks(txn: HealthcareTransaction) {
  const reasons: string[] = [];

  if (txn.amount <= 0) reasons.push("non_positive_amount");
  if (txn.isRefund && txn.amount > 5000) reasons.push("large_refund");
  if (txn.serviceDate > txn.postedAt.slice(0, 10)) reasons.push("service_date_after_posting");
  
  return {
    flagged: reasons.length > 0,
    reasons,
    riskScore:
      reasons.includes("service_date_after_posting") ? 85 :
      reasons.includes("large_refund") ? 70 :
      reasons.length ? 40 : 5,
    severity:
      reasons.includes("service_date_after_posting") ? "high" :
      reasons.includes("large_refund") ? "medium" : "low",
    recommendedAction:
      reasons.length > 0 ? "manual_review" : "auto_close",
    txId: txn.transactionId,
  };
}
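
A quick sanity check that the rules behave as intended, here with an impossible service date:

// A service date after the posting date should be flagged as high severity.
const probe = deterministicChecks({
  transactionId: "tx_test",
  memberId: "m_test",
  providerId: "prov_test",
  payerId: "payer_test",
  serviceDate: "2026-04-22",
  postedAt: "2026-04-20T14:12:00Z",
  amount: 180,
  currency: "USD",
  codeType: "CPT",
  code: "99213",
  locationState: "CA",
  isRefund: false,
});
// probe.flagged === true, probe.severity === "high", probe.recommendedAction === "manual_review"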

4) Run the agents together with a guarded workflow

This is the actual orchestration pattern: validate locally, redact PHI, then ask the assistant to interpret the signal. The final output should be persisted as an auditable case record.

async function monitorTransaction(rawTxn: unknown) {
  // 1) Validate the raw payload before anything else touches it.
  const txn = HealthcareTransactionSchema.parse(rawTxn);

  // 2) Run deterministic rule checks locally.
  const localResult = deterministicChecks(txn);

  // 3) Redact PHI: only prefixes and non-identifying fields go to the model.
  const redactedTxn = {
    transactionId: txn.transactionId,
    providerIdPrefix: txn.providerId.slice(0, 4),
    payerIdPrefix: txn.payerId.slice(0, 4),
    amount: txn.amount,
    currency: txn.currency,
    codeType: txn.codeType,
    codePrefix: txn.code.slice(0, 3),
    locationState: txn.locationState,
    isRefund: txn.isRefund,
    localResult,
  };

  // 4) Ask the assistant to interpret the redacted signal.
  const response = await fraudAnalyst.generateReply([
    {
      role: "system",
      content:
        "You are given redacted healthcare payment data plus deterministic rule output. Return only valid JSON.",
    },
    {
      role: "user",
      content: JSON.stringify(redactedTxn),
    },
  ]);

  // 5) Return everything needed to persist an auditable case record.
  return {
    transactionId: txn.transactionId,
    localResult,
    analystOutputText: response.content,
  };
}

const sampleTxn = {
  transactionId: "tx_1001",
  memberId: "m_12345",
  providerId: "prov_7788",
  payerId: "payer_22",
  serviceDate: "2026-04-20",
  postedAt: "2026-04-20T14:12:00Z",
  amount: 12500,
  currency: "USD",
  codeType: "CPT",
  code: "99213",
  locationState: "CA",
  isRefund: true,
};

monitorTransaction(sampleTxn).then(console.log);
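
monitorTransaction returns the analyst text as-is. Before persisting anything, validate that text and assemble the auditable case record described in the architecture. A sketch, assuming the AnalystOutputSchema defined in step 2 and a persistCase function you supply (queue, database, or case-management API):

export async function recordCase(rawTxn: unknown) {
  const result = await monitorTransaction(rawTxn);

  // The model may return malformed JSON; treat that as a reviewable event, not a crash.
  let analystOutput: AnalystOutput | null = null;
  try {
    analystOutput = AnalystOutputSchema.parse(JSON.parse(String(result.analystOutputText)));
  } catch {
    analystOutput = null;
  }

  const caseRecord = {
    transactionId: result.transactionId,
    ruleHits: result.localResult.reasons,
    ruleSeverity: result.localResult.severity,
    analystOutput, // null means the model output was unusable
    recommendedAction: analystOutput?.recommendedAction ?? "manual_review",
    createdAt: new Date().toISOString(),
  };

  await persistCase(caseRecord); // your queue / case-system write
  return caseRecord;
}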

Production Considerations

  • Keep PHI out of model prompts

    • Redact member names, full IDs, addresses, exact dates of birth, and full claim narratives.
    • Send only what is needed for anomaly reasoning.
  • Enforce residency and retention

    • If your healthcare data must stay in-region, pin model endpoints and storage to that region.
    • Store audit logs separately from PHI-bearing records.
  • Make every alert explainable

    • Persist rule hits, model rationale, versioned prompts, and thresholds.
    • Compliance teams need traceability for why a claim was flagged.
  • Add human review gates

For high-severity cases like impossible dates or repeated refund abuse, route to a reviewer before any downstream action, and do not let the agent auto-deny claims without policy approval; a minimal routing gate is sketched below.
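
A small gate over the case record makes this explicit; the key property is that nothing in this path denies a claim (routeCase and its inputs are an assumption layered on the case record above):

// High-severity rule hits or an "escalate"/"manual_review" recommendation always reach a human.
// Denial stays with existing policy workflows; this function never returns a "deny" outcome.
function routeCase(caseRecord: {
  ruleSeverity: string;
  recommendedAction: string;
}): "human_review" | "auto_close" {
  const needsHuman =
    caseRecord.ruleSeverity === "high" ||
    caseRecord.recommendedAction === "escalate" ||
    caseRecord.recommendedAction === "manual_review";
  return needsHuman ? "human_review" : "auto_close";
}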

Common Pitfalls

  1. Sending raw PHI into the LLM

    • Avoid it by redacting identifiers and using prefix/tokenized references.
    • Keep enrichment inside your secure backend.
  2. Letting the model make final compliance decisions

    • The model should rank risk and explain signals.
    • Final disposition should come from rules plus human review where required.
  3. Skipping audit metadata

    • Every alert needs transaction ID, model version, prompt hash, rule outputs, timestamp, and reviewer action (see the sketch after this list).
    • Without this you cannot defend decisions during audits or payer disputes.
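
One way to make pitfall 3 concrete is to hash the exact redacted prompt payload with Node's built-in crypto module and store it next to the other audit fields. The record shape below is an assumption, not a fixed standard:

import { createHash } from "node:crypto";

export interface AuditRecord {
  transactionId: string;
  modelVersion: string;    // the model name/version you pinned for this run
  promptHash: string;      // SHA-256 of the redacted prompt payload sent to the model
  ruleOutputs: string[];   // deterministic reason codes
  createdAt: string;       // ISO timestamp
  reviewerAction?: string; // filled in once a human disposition exists
}

export function buildAuditRecord(
  transactionId: string,
  modelVersion: string,
  promptPayload: string,
  ruleOutputs: string[],
): AuditRecord {
  return {
    transactionId,
    modelVersion,
    promptHash: createHash("sha256").update(promptPayload).digest("hex"),
    ruleOutputs,
    createdAt: new Date().toISOString(),
  };
}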

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
