How to Build a Compliance-Checking Agent for Payments Using LlamaIndex in TypeScript

By Cyprian Aarons · Updated 2026-04-21
Tags: compliance-checking, llamaindex, typescript, payments

A compliance-checking agent for payments reviews a transaction, customer profile, and policy corpus, then returns a decision with evidence: allow, block, or escalate. For payments teams this matters because every false positive slows revenue, and every false negative creates regulatory exposure and audit pain.

Architecture

  • Policy knowledge base

    • Store PCI DSS rules, AML/KYC policies, sanctions guidance, and internal payment rules as source documents.
    • Index them with LlamaIndex so the agent can retrieve exact clauses instead of guessing.
  • Transaction context builder

    • Normalize payment payloads into a structured input:
      • amount
      • currency
      • country pair
      • merchant category
      • customer risk score
      • device/session metadata
  • Retriever-backed compliance engine

    • Use VectorStoreIndex and a retriever to fetch relevant policy chunks for each transaction.
    • This keeps decisions grounded in your approved policy set.
  • Decision layer

    • Use an LLM through LlamaIndex to classify the case into:
      • ALLOW
      • BLOCK
      • ESCALATE
    • Force structured output so downstream systems can consume it safely.
  • Audit log writer

    • Persist the input, retrieved policy snippets, model output, and final decision.
    • This is what you hand to risk, legal, and regulators.
  • Human review queue

    • Route borderline cases to an analyst when confidence is low or when rules require manual review.
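
The pieces above can be pinned down as plain TypeScript shapes plus one routing rule. Everything here is a sketch: the field names, the audit fields, and the risk-score threshold are illustrative assumptions for this guide, not a fixed schema.

```ts
// Illustrative shapes for the architecture above. Field names and
// thresholds are assumptions, not a fixed schema.
type PaymentContext = {
  transactionId: string;
  amount: number;
  currency: string;
  originCountry: string;
  destinationCountry: string;
  merchantCategoryCode: string;
  customerRiskScore: number; // 0-100, higher = riskier
};

type ComplianceDecision = {
  decision: "ALLOW" | "BLOCK" | "ESCALATE";
  reasons: string[];
  citations: string[];
};

// What the audit log writer persists for every decision.
type AuditRecord = {
  transactionId: string;
  payloadHash: string;          // hash of the raw input, not the input itself
  retrievedPolicyIds: string[]; // which policy chunks the retriever returned
  modelVersion: string;
  decision: ComplianceDecision;
  decidedAt: string;            // ISO timestamp
};

// Human review queue: route to an analyst when the model escalates or
// when the customer risk score crosses a (hypothetical) threshold.
function needsHumanReview(ctx: PaymentContext, d: ComplianceDecision): boolean {
  return d.decision === "ESCALATE" || ctx.customerRiskScore >= 80;
}
```

Keeping these types in one shared module means the agent, the audit writer, and the review queue cannot drift apart silently.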

Implementation

  1. Load policy documents and build a vector index

    Start by putting your payment compliance docs in a directory. In practice this includes internal card acceptance rules, AML thresholds, sanctions handling notes, and regional data handling policies.

    import { SimpleDirectoryReader, VectorStoreIndex } from "llamaindex";
    
    async function buildPolicyIndex() {
      // Reads every file in the directory into Document objects.
      const reader = new SimpleDirectoryReader();
      const docs = await reader.loadData("./policy-docs");
    
      // Chunk, embed, and index the policy corpus for retrieval.
      const index = await VectorStoreIndex.fromDocuments(docs);
      return index;
    }
    
  2. Create a retrieval function for payment cases

    The agent should not read the entire policy set on every request. Retrieve only the clauses that match the transaction context: geography, amount band, risk level, and product type.
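
One way to keep retrieval focused is to turn the normalized transaction into a short policy-search query. The sketch below assumes the field names and bands from the architecture section; the thresholds are illustrative.

```ts
// Build a focused policy-search query from the transaction context so the
// retriever pulls clauses about the right geography, amount band, and MCC.
// Field names and band cutoffs are illustrative assumptions.
function buildPolicyQuery(tx: {
  amount: number;
  currency: string;
  originCountry: string;
  destinationCountry: string;
  merchantCategoryCode: string;
  customerRiskScore: number;
}): string {
  const amountBand =
    tx.amount >= 10000 ? "high-value" : tx.amount >= 1000 ? "mid-value" : "low-value";
  const riskLevel =
    tx.customerRiskScore >= 70 ? "high-risk customer" : "standard-risk customer";
  return [
    `${amountBand} ${tx.currency} payment`,
    `corridor ${tx.originCountry} to ${tx.destinationCountry}`,
    `merchant category ${tx.merchantCategoryCode}`,
    riskLevel,
  ].join("; ");
}
```

The resulting string is what you would pass to the query engine below, instead of dumping the raw payload into the retriever.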

    import { QueryEngineTool, VectorStoreIndex } from "llamaindex";
    
    async function buildComplianceTool(index: VectorStoreIndex) {
      const queryEngine = index.asQueryEngine({
        similarityTopK: 4,
      });
    
      return new QueryEngineTool({
        queryEngine,
        metadata: {
          name: "payment_compliance_policy_search",
          description:
            "Searches payment compliance policies for AML, KYC, sanctions, PCI DSS, and regional handling rules.",
        },
      });
    }
    
  3. Wrap it in an agent that produces structured decisions

    For production payments flows, avoid free-form answers. Ask the model for a strict JSON decision with reasons and citations. You can use OpenAI with ReActAgent or route through a query engine; the key is forcing deterministic structure around an evidence-backed answer.

    import {
      OpenAI,
      QueryEngineTool,
      ReActAgent,
      Settings,
    } from "llamaindex";
    
    Settings.llm = new OpenAI({
      model: "gpt-4o-mini",
      temperature: 0,
    });
    
    type ComplianceDecision = {
      decision: "ALLOW" | "BLOCK" | "ESCALATE";
      reasons: string[];
      citations: string[];
    };
    
    async function checkPaymentCompliance(
      tool: QueryEngineTool,
      transaction: Record<string, unknown>
    ): Promise<ComplianceDecision> {
      const agent = new ReActAgent({
        tools: [tool],
        llm: Settings.llm,
        verbose: true,
      });
    
      // Build the instruction outside a multi-line literal so indentation
      // never leaks into the prompt.
      const prompt = [
        "You are a payment compliance checker. Return only valid JSON with",
        "keys: decision, reasons, citations. Decision must be one of ALLOW,",
        "BLOCK, ESCALATE.",
        `Transaction: ${JSON.stringify(transaction)}`,
        "Use retrieved policy evidence before deciding.",
      ].join("\n\n");
    
      const response = await agent.chat({ message: prompt });
    
      return JSON.parse(String(response.message.content)) as ComplianceDecision;
    }
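
`JSON.parse` alone will happily accept malformed decisions, so it is worth validating the model output before anything downstream consumes it. A minimal validator sketch, assuming the model may also wrap its JSON in a markdown fence:

```ts
// Same decision type as above, repeated so this snippet stands alone.
type ComplianceDecision = {
  decision: "ALLOW" | "BLOCK" | "ESCALATE";
  reasons: string[];
  citations: string[];
};

// Validate raw model output; throws on anything that is not well-formed.
function parseDecision(raw: string): ComplianceDecision {
  // Models sometimes wrap JSON in a markdown fence; strip it defensively.
  const cleaned = raw
    .replace(/^\s*```(?:json)?\s*/i, "")
    .replace(/\s*```\s*$/, "")
    .trim();
  const parsed = JSON.parse(cleaned);
  if (!["ALLOW", "BLOCK", "ESCALATE"].includes(parsed.decision)) {
    throw new Error(`Invalid decision: ${String(parsed.decision)}`);
  }
  if (!Array.isArray(parsed.reasons) || !Array.isArray(parsed.citations)) {
    throw new Error("reasons and citations must be arrays");
  }
  return parsed as ComplianceDecision;
}
```

Failing loudly here is deliberate: a thrown error can route the case to the human review queue, while a silently mangled decision cannot.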


  4. Persist audit records before you return the result

Payments teams need traceability. Store the original payload hash, retrieved evidence summary, model version, and final action. If you cannot explain the decision later, you do not have a compliance system; you have a chat app.
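
For the payload hash, Node's built-in `crypto` module is enough. A sketch (the key-sorting step is a design choice, not a requirement):

```ts
import { createHash } from "node:crypto";

// Hash the raw payload so the audit trail can prove which input was scored
// without storing sensitive fields verbatim. Keys are sorted first so
// semantically identical payloads hash identically.
function hashPayload(payload: Record<string, unknown>): string {
  const canonical = JSON.stringify(
    Object.fromEntries(
      Object.entries(payload).sort(([a], [b]) => a.localeCompare(b))
    )
  );
  return createHash("sha256").update(canonical).digest("hex");
}
```

Store the hex digest in the audit record alongside the model version and the retrieved policy IDs.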

```ts
// In production, build the index once at startup and reuse it per request;
// rebuilding it on every transaction is shown here only for brevity.
async function handleTransaction() {
  const index = await buildPolicyIndex();
  const tool = await buildComplianceTool(index);

  const transaction = {
    transactionId: "tx_123",
    amount: 12500,
    currency: "USD",
    originCountry: "GB",
    destinationCountry: "NG",
    merchantCategoryCode: "4829",
    customerRiskScore: 82,
    deviceRiskScore: 71,
    dataResidencyRegion: "eu-west-1",
  };

  const result = await checkPaymentCompliance(tool, transaction);

  // Swap console.log for a durable audit store (DB, append-only log) in production.
  console.log({
    transactionId: transaction.transactionId,
    ...result,
    model: "gpt-4o-mini",
    timestamp: new Date().toISOString(),
  });
}
```

Production Considerations

  • Pin data residency

    • Keep policy indexes and transaction logs in-region if you process EU or UK payment data.
    • Do not send raw PANs or sensitive personal data into prompts; tokenize or redact first.
  • Add hard guardrails before the LLM

    • Reject obvious violations with deterministic rules first:
      • sanctioned country
      • blocked merchant category
      • amount above manual-review threshold
    • Use the agent for nuanced cases only.
  • Log every retrieval

    • Capture which policy chunks were returned by VectorStoreIndex.
    • This gives you an audit trail showing why the model made its decision.
  • Monitor drift

    • Track false blocks vs false allows by region and product line.
    • Policy changes happen often in payments; rebuild indexes whenever legal updates land.
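
Two of the bullets above reduce to small deterministic helpers that run before the LLM ever sees a case. The country list, blocked MCC, thresholds, and masking rule below are illustrative placeholders, not real policy:

```ts
// Deterministic pre-checks that run BEFORE the agent. Lists and
// thresholds here are placeholders, not real policy.
const SANCTIONED_COUNTRIES = new Set(["KP", "IR"]);
const BLOCKED_MCCS = new Set(["7995"]); // e.g. gambling, if policy forbids it
const MANUAL_REVIEW_THRESHOLD = 10000;

type PreCheckResult =
  | { decided: true; decision: "BLOCK" | "ESCALATE"; reason: string }
  | { decided: false };

function preScreen(tx: {
  amount: number;
  destinationCountry: string;
  merchantCategoryCode: string;
}): PreCheckResult {
  if (SANCTIONED_COUNTRIES.has(tx.destinationCountry)) {
    return { decided: true, decision: "BLOCK", reason: "sanctioned destination" };
  }
  if (BLOCKED_MCCS.has(tx.merchantCategoryCode)) {
    return { decided: true, decision: "BLOCK", reason: "blocked merchant category" };
  }
  if (tx.amount >= MANUAL_REVIEW_THRESHOLD) {
    return { decided: true, decision: "ESCALATE", reason: "above manual-review threshold" };
  }
  return { decided: false }; // only now does the agent get involved
}

// Mask a PAN before anything reaches a prompt: keep the last four digits only.
function redactPan(pan: string): string {
  const digits = pan.replace(/\D/g, "");
  return digits.length >= 4 ? `****${digits.slice(-4)}` : "****";
}
```

Obvious cases never cost you a model call, never vary between runs, and never leak card data into a prompt.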

Common Pitfalls

  1. Using the LLM as the first line of defense

    If you ask the model to decide before applying deterministic rules, you will get inconsistent outcomes on obvious cases. Put rule-based checks in front of the agent so it only handles ambiguous scenarios.

  2. Sending raw payment data into prompts

    PANs, account numbers, and full PII do not belong in model prompts unless they are masked or tokenized. Pass only what the compliance decision needs.

  3. Skipping auditability

    A plain text answer like “looks fine” is useless in production payments. Always store retrieved policy evidence, input metadata, model version, and final action so compliance can reconstruct the decision later.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
