How to Build a Transaction Monitoring Agent Using LlamaIndex in TypeScript for Lending
A transaction monitoring agent for lending watches borrower activity, flags suspicious or policy-breaking patterns, and turns raw transactions into actionable case notes for analysts. It matters because lending teams need faster fraud detection, early delinquency signals, and defensible audit trails without burying investigators in false positives.
Architecture
- Transaction ingestion layer
  - Pulls loan events from core banking, card processors, ACH rails, or internal ledgers.
  - Normalizes fields like `borrowerId`, `loanId`, `amount`, `merchantCategory`, `timestamp`, and `jurisdiction`.
- Risk rules and policy engine
  - Encodes lending-specific controls such as velocity checks, cash advance restrictions, round-dollar structuring, and unusual repayment behavior.
  - Produces deterministic flags before any LLM reasoning.
- LlamaIndex agent layer
  - Uses `ReActAgent` or `OpenAIAgent` to interpret flagged transactions.
  - Calls tools for retrieval, scoring, and case summarization.
- Retrieval layer
  - Stores policy docs, underwriting rules, prior SAR-style narratives, and analyst playbooks in a vector index.
  - Uses `VectorStoreIndex` and `RetrieverQueryEngine` to ground the agent in internal policy.
- Case management output
  - Writes structured alerts with reason codes, evidence snippets, confidence, and next actions.
  - Feeds downstream review queues and audit logs.
- Audit and governance layer
  - Captures prompts, tool calls, retrieved sources, model outputs, and analyst overrides.
  - Supports compliance review, data residency controls, and retention policies.
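The ingestion layer's normalization step can be sketched as a pure mapping function. The raw field names here (`acct_ref`, `amt_minor`, and so on) are hypothetical placeholders for whatever your core banking export actually uses, not a real vendor schema:

```typescript
// Hypothetical raw event shape from a core banking export.
// Field names are illustrative assumptions, not a real vendor schema.
type RawLoanEvent = {
  acct_ref: string;
  loan_ref: string;
  amt_minor: number; // amount in minor currency units (e.g. cents)
  ccy: string;
  mcc_desc: string;
  ctry: string;
  posted_at: string; // ISO-8601 timestamp
};

// Normalize one raw event into the canonical monitoring fields.
function normalize(raw: RawLoanEvent, transactionId: string) {
  return {
    transactionId,
    borrowerId: raw.acct_ref,
    loanId: raw.loan_ref,
    amount: raw.amt_minor / 100, // convert minor units to major
    currency: raw.ccy.toUpperCase(),
    merchantCategory: raw.mcc_desc.toLowerCase().replace(/\s+/g, "_"),
    country: raw.ctry.toUpperCase(),
    timestamp: new Date(raw.posted_at).toISOString(),
  };
}
```

Doing this canonicalization at the edge means every downstream layer, rules, retrieval, and the agent, sees one consistent shape.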
Implementation
1) Install the packages and define your domain types
Use LlamaIndex’s TypeScript SDK plus a real chat model provider. For production lending workflows, keep transaction metadata typed so you can enforce policy before the agent sees anything sensitive.
```bash
npm install llamaindex zod
```
```typescript
import {
  Document,
  VectorStoreIndex,
  OpenAI,
  OpenAIAgent,
  QueryEngineTool,
} from "llamaindex";

type Transaction = {
  transactionId: string;
  borrowerId: string;
  loanId: string;
  amount: number;
  currency: string;
  merchantCategory: string;
  country: string;
  timestamp: string;
};

type Alert = {
  transactionId: string;
  riskLevel: "low" | "medium" | "high";
  reasons: string[];
};
```
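Transactions arrive from systems you don't control, so it's worth validating payloads at the boundary before rules or the agent see them. The install step above pulls in `zod` for exactly this; the sketch below uses a hand-rolled type guard instead to stay dependency-free, and repeats the `Transaction` type so the block is self-contained:

```typescript
// Transaction type from step 1, repeated so this sketch is self-contained.
type Transaction = {
  transactionId: string;
  borrowerId: string;
  loanId: string;
  amount: number;
  currency: string;
  merchantCategory: string;
  country: string;
  timestamp: string;
};

// Runtime guard for untrusted ingestion payloads: reject anything that
// does not match the Transaction shape before it reaches the rules
// engine or the agent. A zod schema could replace this one-for-one.
function isTransaction(value: unknown): value is Transaction {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  const stringFields = [
    "transactionId",
    "borrowerId",
    "loanId",
    "currency",
    "merchantCategory",
    "country",
    "timestamp",
  ];
  if (!stringFields.every((f) => typeof v[f] === "string" && v[f] !== "")) {
    return false;
  }
  return typeof v.amount === "number" && Number.isFinite(v.amount);
}
```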
2) Load lending policies into a retrieval index
Your agent should not guess policy. Put underwriting guidance, collections rules, AML escalation notes, and regional constraints into documents that can be retrieved at runtime.
```typescript
const policyDocs = [
  new Document({
    text: `
Lending transaction monitoring policy:
- Flag cash-like activity above $5,000 within a rolling 7-day window.
- Flag repayment from third-party accounts unless explicitly approved.
- Escalate if borrower transacts in restricted jurisdictions.
- Preserve audit evidence for all high-risk alerts.
`,
    metadata: { source: "lending-policy-v1", region: "US" },
  }),
];

const index = await VectorStoreIndex.fromDocuments(policyDocs);
const retriever = index.asRetriever({ similarityTopK: 3 });
const queryEngine = index.asQueryEngine();
```
3) Wrap retrieval as a tool and build the agent
The pattern here is simple: deterministic checks first, retrieval second, agent explanation last. That keeps the system defensible when an investigator asks why a transaction was flagged.
```typescript
// Wrap the policy index as a tool the agent can call. Naming and
// describing the tool helps the model decide when to invoke it.
const policyTool = new QueryEngineTool({
  queryEngine,
  metadata: {
    name: "lending_policy",
    description:
      "Looks up internal lending transaction monitoring policy and escalation rules.",
  },
});

const llm = new OpenAI({
  model: "gpt-4o-mini",
});

const agent = new OpenAIAgent({
  tools: [policyTool],
  llm,
});
```
4) Score a transaction and generate an alert
Do not send every transaction straight to the model. Apply hard rules first, then let the agent explain the alert using retrieved policy text.
```typescript
function ruleScore(txn: Transaction): { score: number; reasons: string[] } {
  const reasons: string[] = [];
  let score = 0;

  if (txn.amount >= 5000) {
    score += 40;
    reasons.push("High-value transaction");
  }
  if (txn.merchantCategory === "cash_advance") {
    score += 35;
    reasons.push("Cash-like activity");
  }
  if (["IR", "KP", "SY"].includes(txn.country)) {
    score += 50;
    reasons.push("Restricted jurisdiction");
  }

  return { score, reasons };
}
```
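`ruleScore` evaluates one transaction at a time, but the policy's "$5,000 within a rolling 7-day window" rule needs history. A minimal in-memory sketch of that velocity check follows; a production system would back this with a store keyed by `borrowerId` rather than an array:

```typescript
type WindowTxn = { borrowerId: string; amount: number; timestamp: string };

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// Sum a borrower's activity over the trailing 7 days (relative to the
// current transaction) and report whether the rolling total breaches
// the policy threshold.
function breachesRollingLimit(
  history: WindowTxn[],
  current: WindowTxn,
  limit = 5000,
): boolean {
  const cutoff = new Date(current.timestamp).getTime() - SEVEN_DAYS_MS;
  const windowTotal = history
    .filter(
      (t) =>
        t.borrowerId === current.borrowerId &&
        new Date(t.timestamp).getTime() >= cutoff,
    )
    .reduce((sum, t) => sum + t.amount, current.amount);
  return windowTotal > limit;
}
```

A breach detected here would simply add another entry to `reasons` and bump the deterministic score before the agent is ever involved.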
```typescript
async function monitorTransaction(txn: Transaction): Promise<Alert> {
  const scored = ruleScore(txn);

  const prompt = `
Review this lending transaction for monitoring:
${JSON.stringify(txn)}

Deterministic reasons:
${scored.reasons.join(", ") || "none"}

Use the lending policy tool to confirm whether this should be escalated.
Return a concise explanation suitable for an analyst case note.
`;

  const response = await agent.chat({ message: prompt });

  return {
    transactionId: txn.transactionId,
    riskLevel:
      scored.score >= 70 ? "high" : scored.score >= 40 ? "medium" : "low",
    // The agent's explanation rides along with the deterministic reasons.
    reasons: [...scored.reasons, String(response.response)],
  };
}
```
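The "rules first, model second" gate described above can be made explicit as a batch driver. `needsAgentReview`, `triage`, and the `explain` callback are illustrative names, not llamaindex APIs; in practice `explain` would wrap the `agent.chat` call:

```typescript
type Scored = { score: number; reasons: string[] };

// Decide whether a transaction's deterministic score justifies the cost
// (and data exposure) of an LLM explanation. The threshold is a tunable
// assumption, aligned with the "medium" band used above.
function needsAgentReview(scored: Scored, threshold = 40): boolean {
  return scored.score >= threshold;
}

// Batch driver: deterministic triage first, agent explanation only for
// transactions that clear the threshold.
async function triage(
  batch: Scored[],
  explain: (s: Scored) => Promise<string>,
): Promise<string[]> {
  const notes: string[] = [];
  for (const scored of batch) {
    if (needsAgentReview(scored)) {
      notes.push(await explain(scored));
    } else {
      notes.push(`auto-closed: ${scored.reasons.join(", ") || "no flags"}`);
    }
  }
  return notes;
}
```

This keeps model spend proportional to actual risk and gives investigators a clean story: every agent invocation is preceded by a deterministic trigger.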
Production Considerations
- Deploy the model path separately from the rules path
  - Keep deterministic scoring in your service layer.
  - Use the LlamaIndex agent only for explanation and retrieval-backed context.
- Log everything needed for audit
  - Store input payload hashes, retrieved document IDs, prompt text, tool outputs, and final alert payloads.
  - In lending reviews you need reproducibility across analysts and regulators.
- Enforce data residency before indexing
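The "input payload hashes" mentioned above can be produced deterministically with Node's built-in `crypto` module. The `AuditRecord` shape below is an assumption for illustration, not a llamaindex type:

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record: enough to reproduce a review without
// storing raw borrower PII in the log itself.
type AuditRecord = {
  payloadSha256: string;
  retrievedDocIds: string[];
  promptSha256: string;
  alertId: string;
  recordedAt: string;
};

// Deterministic hash of a JSON-serializable payload. Key order affects
// JSON.stringify, so sort top-level keys first (a full implementation
// would canonicalize nested objects recursively too).
function sha256Json(payload: Record<string, unknown>): string {
  const canonical = JSON.stringify(
    Object.keys(payload)
      .sort()
      .reduce(
        (acc, k) => ({ ...acc, [k]: payload[k] }),
        {} as Record<string, unknown>,
      ),
  );
  return createHash("sha256").update(canonical).digest("hex");
}
```

Hashing both the input transaction and the exact prompt text lets a reviewer verify later that the alert they are looking at came from the data and instructions on record.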
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.