How to Build a Transaction Monitoring Agent Using LlamaIndex in TypeScript for Pension Funds
A transaction monitoring agent for pension funds watches contribution, withdrawal, transfer, and benefit-payment activity, then flags patterns that look inconsistent with policy, regulation, or member behavior. It matters because pension administrators need to catch fraud, AML issues, operational errors, and suspicious benefit access early without drowning compliance teams in false positives.
Architecture
- Transaction ingestion layer
  - Pulls events from core pension admin systems, payment rails, SFTP drops, or Kafka topics.
  - Normalizes records into a consistent schema: member ID, account ID, amount, currency, timestamp, channel, counterparty.
- Policy and rules context
  - Stores pension-specific rules like contribution caps, early withdrawal restrictions, beneficiary changes, and jurisdiction-specific thresholds.
  - Feeds the agent the exact policy text it should cite in decisions.
- LlamaIndex retrieval layer
  - Uses `VectorStoreIndex` to retrieve relevant policy excerpts, SOPs, and historical case notes.
  - Keeps the agent grounded in internal documents instead of free-form reasoning.
- Monitoring agent
  - Uses a LlamaIndex `OpenAI` LLM with tools for retrieval and structured analysis.
  - Produces a risk assessment plus a short explanation that compliance can audit.
- Case management output
  - Writes alerts to a queue or case system with severity, reasons, evidence links, and recommended next action.
  - Preserves a full audit trail for review.
- Controls layer
  - Enforces data residency, PII redaction, and human approval for high-risk actions.
  - Prevents the agent from making autonomous decisions on member funds.
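The normalization step in the ingestion layer can be sketched as a pure mapping function. The raw event shape (`RawPaymentEvent`) and the `TYPE_MAP`/`CHANNEL_MAP` codes below are illustrative assumptions about what a payment rail might emit, not a real feed format:

```typescript
// Hypothetical raw event shape from a payment rail or Kafka topic (assumed for illustration).
type RawPaymentEvent = {
  id: string;
  member: string;
  account: string;
  kind: string;              // e.g. "CONTRIB", "WDRAW"
  value_minor_units: number; // e.g. cents
  ccy: string;
  country: string;
  occurred_at: string;       // ISO 8601
  via: string;               // e.g. "ACH", "CARD"
};

type PensionTransaction = {
  transactionId: string;
  memberId: string;
  accountId: string;
  type: "contribution" | "withdrawal" | "transfer" | "benefit_payment";
  amount: number;
  currency: string;
  country: string;
  timestamp: string;
  channel: "bank_transfer" | "card" | "internal_transfer" | "cash";
};

// Source-system codes mapped to the canonical enums; unmapped codes fail loudly
// so bad feeds surface as ingestion errors, not silent misclassification.
const TYPE_MAP: Record<string, PensionTransaction["type"]> = {
  CONTRIB: "contribution",
  WDRAW: "withdrawal",
  XFER: "transfer",
  BENEFIT: "benefit_payment",
};

const CHANNEL_MAP: Record<string, PensionTransaction["channel"]> = {
  ACH: "bank_transfer",
  CARD: "card",
  INTERNAL: "internal_transfer",
  CASH: "cash",
};

function normalizePensionEvent(raw: RawPaymentEvent): PensionTransaction {
  const type = TYPE_MAP[raw.kind];
  const channel = CHANNEL_MAP[raw.via];
  if (!type || !channel) {
    throw new Error(`Unmapped event: kind=${raw.kind} via=${raw.via}`);
  }
  return {
    transactionId: raw.id,
    memberId: raw.member,
    accountId: raw.account,
    type,
    amount: raw.value_minor_units / 100, // convert minor units to major
    currency: raw.ccy,
    country: raw.country,
    timestamp: new Date(raw.occurred_at).toISOString(),
    channel,
  };
}
```

Keeping this mapping deterministic and total (throw on unknown codes) is what makes the downstream alerting reproducible.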
Implementation
- Install dependencies and define your transaction shape
Use LlamaIndex TS with a vector index for policy retrieval and an LLM-backed agent for analysis. Keep the transaction schema explicit so downstream alerting is deterministic.
```bash
npm install llamaindex zod
```
```typescript
import {
  Document,
  VectorStoreIndex,
  Settings,
  OpenAI,
} from "llamaindex";

type PensionTransaction = {
  transactionId: string;
  memberId: string;
  accountId: string;
  type: "contribution" | "withdrawal" | "transfer" | "benefit_payment";
  amount: number;
  currency: string;
  country: string;
  timestamp: string;
  channel: "bank_transfer" | "card" | "internal_transfer" | "cash";
};

Settings.llm = new OpenAI({
  model: "gpt-4o-mini",
});
```
- Index your pension policies and procedures
You want the agent to retrieve only the rules that apply to the current case. Build a VectorStoreIndex from policy documents such as AML procedures, benefit payment rules, withdrawal exceptions, and jurisdiction notes.
```typescript
const policyDocs = [
  new Document({
    text: `
Pension Fund Withdrawal Policy:
- Early withdrawals require documented hardship evidence.
- Any withdrawal above USD 25,000 requires manual review.
- Beneficiary changes within 30 days of withdrawal are high risk.
`,
    metadata: { source: "withdrawal-policy", jurisdiction: "US" },
  }),
  new Document({
    text: `
AML Monitoring Procedure:
- Flag rapid movement of funds across multiple accounts.
- Escalate repeated small contributions just below reporting thresholds.
- Preserve all alert rationale for audit review.
`,
    metadata: { source: "aml-procedure", jurisdiction: "global" },
  }),
];

const index = await VectorStoreIndex.fromDocuments(policyDocs);
const retriever = index.asRetriever({ similarityTopK: 3 });
```
- Build the monitoring function with retrieval-grounded analysis
The pattern here is simple: retrieve relevant policy text first, then ask the LLM to score the transaction against those rules. Return structured output that your case system can ingest.
```typescript
async function monitorTransaction(txn: PensionTransaction) {
  const query = `
Review this pension fund transaction for suspicious activity:
${JSON.stringify(txn)}

Return:
- riskLevel: low|medium|high
- reasons: array of short strings
- recommendedAction: one sentence
- policyReferences: array of exact rule snippets used
`;

  const nodes = await retriever.retrieve({ query });
  const context = nodes
    .map((n) => n.node.getContent())
    .join("\n\n");

  const llm = Settings.llm as OpenAI;
  const response = await llm.complete({
    prompt: `
You are a transaction monitoring analyst for a pension fund.
Use only the policy context below.

POLICY CONTEXT:
${context}

TRANSACTION:
${JSON.stringify(txn)}

Assess this transaction conservatively. If evidence is weak but unusual behavior exists, prefer medium risk over low risk.
`,
  });

  return {
    transactionId: txn.transactionId,
    analysisText: response.text,
    policyContextUsed: context,
  };
}
```
- Wrap it in an alerting workflow
In production you do not just print model output. Convert it into an alert record with severity and an immutable audit payload. That gives compliance teams something they can review later.
```typescript
async function createAlert(txn: PensionTransaction) {
  const result = await monitorTransaction(txn);

  return {
    alertId: `alert_${txn.transactionId}`,
    transactionId: txn.transactionId,
    memberId: txn.memberId,
    severity:
      result.analysisText.toLowerCase().includes("high") ? "high" : "medium",
    status: "open",
    auditTrail: {
      modelProvider: "openai",
      modelNameHint: "gpt-4o-mini",
      policyContextUsed: result.policyContextUsed,
      rawAnalysisText: result.analysisText,
      createdAt: new Date().toISOString(),
    },
  };
}
```
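One way to make the audit payload tamper-evident (a sketch, not something LlamaIndex provides) is to hash a canonical serialization of each alert with Node's built-in crypto module and store the digest in separate write-once storage:

```typescript
import { createHash } from "node:crypto";

// Serialize with object keys sorted recursively so the digest is stable
// regardless of property insertion order.
function canonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  const obj = value as Record<string, unknown>;
  return `{${Object.keys(obj)
    .sort()
    .map((k) => `${JSON.stringify(k)}:${canonicalize(obj[k])}`)
    .join(",")}}`;
}

// SHA-256 over the canonical form; auditors can recompute this later to
// verify the alert record was not altered after creation.
function auditDigest(alert: object): string {
  return createHash("sha256").update(canonicalize(alert)).digest("hex");
}
```

Storing the digest alongside (but separately from) the alert record gives reviewers a cheap integrity check without a full append-only ledger.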
Production Considerations
- Keep data residency explicit
  - Pension data often cannot leave approved regions.
  - Pin your model endpoint and vector store region to the same jurisdiction as the member data where required by regulation or contract.
- Log every retrieval and every prompt
  - Store retrieved document IDs, chunk IDs, timestamps, model version, and final decision text.
  - Auditors will ask why the agent flagged a member; you need traceability down to the exact policy snippet.
- Add hard guardrails before actioning alerts
  - The agent should never freeze benefits or block payments automatically.
  - Route high-risk cases to human compliance review with clear evidence references.
- Redact PII before indexing non-essential content
  - Do not stuff raw NRICs, bank account numbers, or medical hardship documents into your vector store unless absolutely required.
  - Use field-level masking so retrieval still works without exposing unnecessary sensitive data.
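Field-level masking before indexing can start as a simple regex pass. The patterns below are illustrative assumptions, not a complete PII catalogue; a production system should use a vetted PII-detection library and jurisdiction-specific patterns:

```typescript
// Mask common PII patterns before document text is chunked and embedded.
// Patterns here are examples only: long digit runs that look like bank
// account numbers, Singapore NRIC/FIN identifiers, and email addresses.
function maskPii(text: string): string {
  return text
    // Digit runs of 8-17 characters (typical bank account number lengths).
    .replace(/\b\d{8,17}\b/g, "[ACCOUNT]")
    // Singapore NRIC/FIN format: prefix letter, 7 digits, checksum letter.
    .replace(/\b[STFG]\d{7}[A-Z]\b/g, "[NRIC]")
    // Email addresses.
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]");
}
```

Because the placeholders keep the surrounding sentence intact, embeddings still capture the document's meaning for retrieval while the identifiers themselves never enter the vector store.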
Common Pitfalls
- Using generic prompts without pension-specific rules
  - A generic fraud prompt misses things like early withdrawal restrictions or beneficiary-change abuse.
  - Fix it by indexing actual fund policies and asking the model to cite them.
- Letting the model decide severity without deterministic thresholds
  - If every alert is purely LLM-driven, your team will get inconsistent outcomes.
  - Fix it by combining rule-based triggers with LLM explanation and keeping severity bands stable.
- Ignoring auditability
  - If you cannot reconstruct what documents were retrieved and what prompt was sent, compliance will not trust the system.
  - Fix it by persisting raw prompts, retrieved node IDs, model versioning, and final alert outputs in immutable storage.
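One way to keep severity bands stable is to evaluate deterministic triggers before any LLM call and treat the result as a floor the model can raise but never lower. The thresholds below mirror the sample withdrawal policy earlier in this article and are illustrative, not a real fund's rules:

```typescript
type Severity = "low" | "medium" | "high";

// Deterministic triggers evaluated before the LLM sees the transaction.
// Thresholds are examples taken from the sample policy text above.
function ruleBasedSeverity(txn: {
  type: "contribution" | "withdrawal" | "transfer" | "benefit_payment";
  amount: number;
  currency: string;
  channel: string;
}): Severity {
  if (txn.type === "withdrawal" && txn.currency === "USD" && txn.amount > 25_000) {
    return "high"; // policy: withdrawals above USD 25,000 require manual review
  }
  if (txn.channel === "cash") {
    return "medium"; // cash movement is unusual for pension accounts
  }
  return "low";
}

// The LLM's suggestion may raise severity with an explanation, but it can
// never downgrade the deterministic floor.
function finalSeverity(ruleFloor: Severity, llmSuggestion: Severity): Severity {
  const rank: Record<Severity, number> = { low: 0, medium: 1, high: 2 };
  return rank[llmSuggestion] > rank[ruleFloor] ? llmSuggestion : ruleFloor;
}
```

With this split, identical transactions always land in the same band, and the model's contribution is auditable explanation plus optional escalation.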
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.