How to Build a Transaction Monitoring Agent Using CrewAI in TypeScript for Healthcare
A transaction monitoring agent for healthcare watches claims, payments, refunds, eligibility checks, and provider billing activity for patterns that look abnormal, risky, or non-compliant. It matters because healthcare data is sensitive, billing errors are expensive, and fraud or policy violations can trigger regulatory exposure, audit findings, and patient trust issues.
Architecture
- Ingestion layer
  - Pulls transactions from claims systems, payment rails, EHR-adjacent billing feeds, or event streams.
  - Normalizes records into a shared schema before the agent sees them.
- Risk scoring toolset
  - Deterministic checks for duplicates, threshold breaches, out-of-network anomalies, and unusual frequency.
  - LLM-assisted reasoning for ambiguous cases that need context from policy docs or prior cases.
- CrewAI orchestration
  - A `Crew` coordinates specialized `Agent`s: one for transaction triage, one for compliance review, and one for case summarization.
  - `Task`s define the exact outputs needed for downstream workflows.
- Policy and evidence store
  - Holds HIPAA policies, payer rules, internal SOPs, and historical case notes.
  - The agent must cite the evidence used in its decision so auditors can replay the reasoning.
- Case management output
  - Writes alerts to a queue or case system with severity, rationale, confidence, and recommended action.
  - Keeps human-in-the-loop review mandatory for high-risk decisions.
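The case management output can be made concrete with a small alert shape. This is a sketch under the assumption that your case system accepts JSON; the field names mirror the list above (severity, rationale, confidence, recommended action) and are otherwise hypothetical.

```typescript
// Hypothetical alert payload written to the case queue; field names mirror
// the architecture notes above.
type Severity = "low" | "medium" | "high";

interface CaseAlert {
  transactionId: string;
  severity: Severity;
  rationale: string; // which checks fired and why
  confidence: number; // 0..1, from the triage output
  recommendedAction: string;
  citedEvidence: string[]; // policy titles or case IDs used in the decision
  requiresHumanReview: boolean;
}

// High-risk alerts always require a human reviewer, per the architecture notes.
function buildAlert(partial: Omit<CaseAlert, "requiresHumanReview">): CaseAlert {
  return { ...partial, requiresHumanReview: partial.severity === "high" };
}
```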
Implementation
1) Install dependencies and define your transaction model
Use the TypeScript CrewAI package plus a small schema layer. In healthcare systems, keep PHI out of prompts unless it is required for the task and explicitly approved by your data handling policy.
```bash
npm install crewai zod dotenv
```
```typescript
import "dotenv/config";
import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  patientId: z.string(),
  providerId: z.string(),
  amount: z.number(),
  currency: z.string().default("USD"),
  type: z.enum(["claim", "refund", "payment", "eligibility_check"]),
  timestamp: z.string(),
  location: z.string().optional(),
  diagnosisCode: z.string().optional(),
});

export type Transaction = z.infer<typeof TransactionSchema>;
```
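To sanity-check what the schema enforces without pulling in the agent stack, the same runtime validation can be sketched as a plain type guard. This is illustrative only (`isTransaction` is a hypothetical helper); the zod schema above remains the source of truth.

```typescript
const TXN_TYPES = ["claim", "refund", "payment", "eligibility_check"] as const;

interface TxnShape {
  transactionId: string;
  patientId: string;
  providerId: string;
  amount: number;
  type: (typeof TXN_TYPES)[number];
  timestamp: string;
}

// Mirrors TransactionSchema's required fields; optional fields and the
// currency default are not checked here.
function isTransaction(input: unknown): input is TxnShape {
  if (typeof input !== "object" || input === null) return false;
  const t = input as Record<string, unknown>;
  return (
    typeof t.transactionId === "string" &&
    typeof t.patientId === "string" &&
    typeof t.providerId === "string" &&
    typeof t.amount === "number" &&
    typeof t.timestamp === "string" &&
    TXN_TYPES.includes(t.type as (typeof TXN_TYPES)[number])
  );
}
```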
2) Create tools for deterministic checks and policy lookup
Do not ask the model to infer everything. Give it tools that return concrete facts. That keeps the agent more auditable and reduces hallucinated compliance conclusions.
```typescript
import { Tool } from "crewai";

const duplicateCheckTool = new Tool({
  name: "duplicate_check",
  description:
    "Checks whether this transaction looks like a duplicate based on ID and amount.",
  func: async ({ transactionId, amount }: { transactionId: string; amount: number }) => {
    // Stub: in production, query your transaction store for near-duplicates.
    const duplicateIds = new Set(["txn_1029", "txn_2044"]);
    return JSON.stringify({
      isDuplicate: duplicateIds.has(transactionId),
      sameAmountPattern: amount > 0 && amount % 25 === 0,
    });
  },
});

const policyLookupTool = new Tool({
  name: "policy_lookup",
  description: "Retrieves relevant healthcare billing policy excerpts by keyword.",
  func: async ({ query }: { query: string }) => {
    // Stub: in production, back this with your policy document store.
    const policies = [
      {
        title: "Refund Handling Policy",
        excerpt: "Refunds above $500 require manual review and documented reason codes.",
      },
      {
        title: "Claim Frequency Policy",
        excerpt: "More than three claims per patient per day requires escalation.",
      },
    ];
    return JSON.stringify(
      policies.filter((p) =>
        `${p.title} ${p.excerpt}`.toLowerCase().includes(query.toLowerCase())
      )
    );
  },
});
```
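The Claim Frequency Policy stubbed above ("more than three claims per patient per day requires escalation") is exactly the kind of rule that belongs in deterministic code rather than in a prompt. A minimal sketch, assuming you can query recent transactions for a patient; the `needsEscalation` helper and its inputs are hypothetical:

```typescript
interface ClaimEvent {
  patientId: string;
  type: string;
  timestamp: string; // ISO 8601
}

// Returns true when a patient would exceed `maxPerDay` claims on the same
// calendar day as the incoming claim, mirroring the Claim Frequency Policy.
function needsEscalation(
  newClaim: ClaimEvent,
  recentTxns: ClaimEvent[],
  maxPerDay = 3
): boolean {
  const day = newClaim.timestamp.slice(0, 10); // YYYY-MM-DD
  const sameDayClaims = recentTxns.filter(
    (t) =>
      t.patientId === newClaim.patientId &&
      t.type === "claim" &&
      t.timestamp.slice(0, 10) === day
  );
  // +1 accounts for the incoming claim itself.
  return sameDayClaims.length + 1 > maxPerDay;
}
```

The agent can then cite this boolean as a concrete fact instead of estimating frequency from raw records.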
3) Build the CrewAI agents and tasks
This is the core pattern. One agent scores risk, another checks compliance context, and a third writes an audit-friendly summary.
```typescript
import { Agent, Task, Crew } from "crewai";

const triageAgent = new Agent({
  role: "Transaction Triage Analyst",
  goal: "Identify suspicious healthcare transactions using deterministic checks and policy context.",
  backstory:
    "You review claims and payment events for fraud indicators, billing anomalies, and operational risk.",
  tools: [duplicateCheckTool],
});

const complianceAgent = new Agent({
  role: "Healthcare Compliance Reviewer",
  goal: "Determine whether the transaction violates internal healthcare billing policy or requires escalation.",
  backstory:
    "You understand audit requirements, HIPAA-sensitive handling rules, and payer policy constraints.",
  tools: [policyLookupTool],
});

const summaryAgent = new Agent({
  role: "Case Writer",
  goal: "Produce a concise audit-ready case summary with clear next actions.",
});
```
Continue the workflow by assembling the crew and running it synchronously so you can control retries and logging in your service layer. The complete entry point wires the three tasks to the three agents defined above (note the summary task is handled by `summaryAgent`):

```typescript
export async function monitorTransaction(txnInput: unknown) {
  // Validate and normalize the input before anything reaches the LLM path.
  const txn = TransactionSchema.parse(txnInput);

  const triageTask = new Task({
    description: `Assess this healthcare transaction for fraud or anomaly signals:\n${JSON.stringify(txn)}`,
    expectedOutput:
      "A JSON object with riskLevel, reasons[], confidenceScore, and recommendedAction.",
    agent: triageAgent,
    asyncExecution: false,
  });

  const complianceTask = new Task({
    description: `Check this transaction against healthcare policy rules:\n${JSON.stringify(txn)}`,
    expectedOutput:
      "A JSON object with complianceStatus, citedPolicy[], escalationRequired.",
    agent: complianceAgent,
    context: [triageTask],
    asyncExecution: false,
  });

  const summaryTask = new Task({
    description:
      "Write an audit-ready case summary using the outputs of prior tasks.",
    expectedOutput: "A short incident summary with severity and reviewer notes.",
    agent: summaryAgent,
    context: [triageTask, complianceTask],
  });

  const crew = new Crew({
    agents: [triageAgent, complianceAgent, summaryAgent],
    tasks: [triageTask, complianceTask, summaryTask],
    verbose: true,
  });

  const result = await crew.kickoff();
  return result;
}
```
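Because the crew runs synchronously, retry control can live in your service layer rather than inside the agent code. A minimal sketch of such a wrapper, assuming transient model or network failures surface as thrown errors; `withRetries` is a hypothetical helper, not part of CrewAI:

```typescript
// Retries an async operation with linear backoff. Each failure is logged so
// the audit trail reflects retry behavior; the last error is rethrown once
// attempts are exhausted.
async function withRetries<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      console.warn(`attempt ${i + 1} failed:`, err);
      await new Promise((r) => setTimeout(r, baseDelayMs * (i + 1)));
    }
  }
  throw lastError;
}

// Usage in your service layer:
//   const result = await withRetries(() => monitorTransaction(txn));
```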
Production Considerations
- Keep PHI out of prompts unless required
  - Redact patient identifiers before sending data to the model when possible.
  - If full identifiers are needed for matching logic, isolate that step in deterministic code outside the LLM path.
- Log every decision path
  - Store input hashes, tool outputs, task results, model version, timestamps, and reviewer overrides.
  - Healthcare audits often require showing why an alert fired months later.
- Enforce data residency
  - Route workloads through approved regions only.
  - If your organization requires in-country processing for protected health information or billing data, pin both model inference and storage to that region.
- Use human approval gates
  - Auto-close low-risk operational noise if allowed.
  - Force manual review for high-value refunds, repeated claim submissions, unusual provider patterns, or anything touching suspected fraud.
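The redaction advice above can be sketched with a one-way hash, so matching and frequency logic still work on stable pseudonyms while raw identifiers never reach the prompt. This uses Node's built-in `node:crypto`; the `redactForPrompt` helper and salt handling are assumptions, so use your organization's approved de-identification method in practice:

```typescript
import { createHash } from "node:crypto";

// One-way pseudonym: the same input always maps to the same token, so
// duplicate and frequency checks still work, but the raw ID stays in-service.
function pseudonymize(id: string, salt: string): string {
  return createHash("sha256").update(salt + id).digest("hex").slice(0, 12);
}

// Replaces direct identifiers before the record is serialized into a prompt.
function redactForPrompt<T extends { patientId: string; providerId: string }>(
  txn: T,
  salt: string
): T {
  return {
    ...txn,
    patientId: pseudonymize(txn.patientId, salt),
    providerId: pseudonymize(txn.providerId, salt),
  };
}
```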
Common Pitfalls
- Sending raw PHI into every prompt
  - This increases exposure without improving detection quality.
  - Avoid it by precomputing features like counts, amounts, frequency bands, and redacted identifiers.
- Letting the LLM make final compliance decisions
  - The model should assist; it should not be your source of legal truth.
  - Use deterministic policy checks first, then let the agent explain or summarize them.
- Skipping audit metadata
  - If you cannot reconstruct why an alert was raised, you do not have a production monitoring system.
  - Persist task inputs/outputs as well as tool responses so compliance teams can replay the case end-to-end.
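The audit-metadata pitfall is easiest to avoid by writing a structured record at decision time. A sketch of what such a record might hold; the field names are illustrative, and hashing the raw input with `node:crypto` means the audit log can prove what was processed without storing PHI itself:

```typescript
import { createHash } from "node:crypto";

interface AuditRecord {
  inputHash: string; // hash of the raw transaction, not the PHI itself
  toolOutputs: Record<string, string>;
  taskResults: Record<string, string>;
  modelVersion: string;
  createdAt: string;
  reviewerOverride?: string;
}

function buildAuditRecord(
  rawInput: unknown,
  toolOutputs: Record<string, string>,
  taskResults: Record<string, string>,
  modelVersion: string
): AuditRecord {
  return {
    inputHash: createHash("sha256")
      .update(JSON.stringify(rawInput))
      .digest("hex"),
    toolOutputs,
    taskResults,
    modelVersion,
    createdAt: new Date().toISOString(),
  };
}
```

Persist one record per decision, keyed by `inputHash`, so a reviewer can replay the case months later.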
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.