How to Build a Compliance-Checking Agent for Payments Using LangChain in TypeScript

By Cyprian Aarons · Updated 2026-04-21

Tags: compliance-checking, langchain, typescript, payments

A compliance-checking agent for payments reviews the transaction, customer context, and policy rules before money moves. Its job is to catch sanctions hits, KYC gaps, suspicious merchant categories, and data-residency violations early enough to block the payment or route it for human review.

Architecture

  • Input adapter

    • Accepts payment events from your app, queue, or API.
    • Normalizes fields like amount, currency, senderCountry, beneficiaryCountry, merchantCategoryCode, and customerId.
  • Policy retrieval layer

    • Pulls the latest compliance rules from a controlled source.
    • In practice this is often a database table, a document store, or a versioned policy file.
  • LLM decision chain

    • Uses LangChain to classify risk and explain the reason.
    • Should return structured output, not free-form prose.
  • Audit logger

    • Stores the input, model output, policy version, timestamp, and final decision.
    • This is mandatory if you need traceability for internal audit or regulators.
  • Human review router

    • Sends borderline cases to an analyst queue.
    • Keeps the agent from making irreversible decisions on ambiguous cases.
  • Guardrail layer

    • Enforces hard rules outside the LLM.
    • Example: block sanctioned jurisdictions or missing customer identity before any model call.
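The flow through these layers ends in a routing decision. A minimal sketch of that last hop, using illustrative destination names ("analyst-queue" and friends are placeholders for your own infrastructure, not part of any library):

```typescript
type Decision = "approve" | "review" | "block";

interface AgentOutcome {
  decision: Decision;
  reasons: string[];
}

// Route each outcome to a downstream destination.
function routeOutcome(outcome: AgentOutcome): string {
  switch (outcome.decision) {
    case "approve":
      return "payment-processor";
    case "review":
      return "analyst-queue"; // human review router
    case "block":
      return "blocked-payments-log";
  }
}

console.log(routeOutcome({ decision: "review", reasons: ["KYC pending"] }));
// "analyst-queue"
```

Keeping this routing in plain code (rather than inside the prompt) means the set of possible destinations is fixed and auditable.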

Implementation

1) Install dependencies and define the compliance schema

You want the model to emit a strict decision object. In LangChain TypeScript, use zod with withStructuredOutput so downstream code can trust the shape.

npm install langchain @langchain/openai zod

import { z } from "zod";

export const PaymentComplianceInputSchema = z.object({
  paymentId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  senderCountry: z.string(),
  beneficiaryCountry: z.string(),
  merchantCategoryCode: z.string().optional(),
  customerKycStatus: z.enum(["verified", "pending", "failed"]),
  sanctionsScreeningStatus: z.enum(["clear", "match", "unknown"]),
  residencyRegion: z.string(), // e.g. "eu-west-1"
});

export const ComplianceDecisionSchema = z.object({
  decision: z.enum(["approve", "review", "block"]),
  reasons: z.array(z.string()),
  policyReferences: z.array(z.string()),
});

2) Build the LangChain chain with structured output

Use ChatOpenAI and a prompt that forces the model to act like a payments compliance reviewer. Keep the instructions narrow. This is not a general assistant; it should only evaluate against policy facts you provide.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { PaymentComplianceInputSchema, ComplianceDecisionSchema } from "./schemas.js";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    [
      "You are a payments compliance reviewer.",
      "Return only decisions based on the provided facts.",
      "If sanctions screening is 'match', block.",
      "If KYC is not verified, route to review unless policy says otherwise.",
      "If residencyRegion does not match required processing region, flag it.",
      "Be concise and cite policy references as short strings.",
    ].join(" "),
  ],
  ["human", "{input}"],
]);

const complianceChain = prompt.pipe(llm.withStructuredOutput(ComplianceDecisionSchema));

3) Add hard guards before calling the model

Do not outsource deterministic checks to an LLM. Sanctions matches, invalid payloads, and residency violations should be handled in code first. That gives you predictable behavior and better auditability.

import { z } from "zod";
import { PaymentComplianceInputSchema } from "./schemas.js";

export type PaymentComplianceInput = z.infer<typeof PaymentComplianceInputSchema>;

export function hardBlockChecks(rawInput: unknown) {
  const input = PaymentComplianceInputSchema.parse(rawInput);

  if (input.sanctionsScreeningStatus === "match") {
    return {
      decision: "block" as const,
      reasons: ["Sanctions screening returned a match"],
      policyReferences: ["SANCTIONS-001"],
    };
  }

  if (input.customerKycStatus === "failed") {
    return {
      decision: "block" as const,
      reasons: ["Customer KYC failed"],
      policyReferences: ["KYC-FAIL-002"],
    };
  }

  return input;
}
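Callers can tell the two return shapes apart with an `in` check, since the validated input never carries a `decision` field. A minimal sketch of the pattern with plain types (the type and field names here are simplified stand-ins for the real schema):

```typescript
type GuardDecision = {
  decision: "block";
  reasons: string[];
  policyReferences: string[];
};

type ValidatedInput = { paymentId: string; sanctionsScreeningStatus: string };

// Type guard: narrows the union so TypeScript knows which shape we hold.
function isGuardDecision(r: GuardDecision | ValidatedInput): r is GuardDecision {
  return "decision" in r;
}

function handle(r: GuardDecision | ValidatedInput): string {
  if (isGuardDecision(r)) {
    // Short-circuit: never call the model on a hard block.
    return `blocked: ${r.reasons.join("; ")}`;
  }
  return `continue to model for ${r.paymentId}`;
}

console.log(handle({ paymentId: "pay_9", sanctionsScreeningStatus: "clear" }));
// "continue to model for pay_9"
```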

4) Run the chain and persist an audit record

The agent should always write an immutable audit trail. Store both the raw input and the final decision with timestamps and policy versions so reviewers can reconstruct what happened later.

import { complianceChain } from "./chain.js";
import { hardBlockChecks } from "./guards.js";

async function main() {
  const rawPayment = {
    paymentId: "pay_123",
    amount: 2500,
    currency: "USD",
    senderCountry: "GB",
    beneficiaryCountry: "AE",
    merchantCategoryCode: "4829",
    customerKycStatus: "verified",
    sanctionsScreeningStatus: "clear",
    residencyRegion: "eu-west-1",
    requiredProcessingRegion: "eu-west-1",
    policyVersion: "2026.01",
    notes:
      "High-value cross-border transfer to a regulated merchant category.",
  };

  // Deterministic guards first; only clean input reaches the model.
  const guardResult = hardBlockChecks(rawPayment);
  if ("decision" in guardResult) {
    console.log("Hard guard decision:", guardResult);
    return;
  }

  const decision = await complianceChain.invoke({
    input: JSON.stringify(guardResult),
  });
  console.log("Model decision:", decision);
}

main();

A complete execution pattern looks like this:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const ComplianceDecisionSchema = z.object({
  decision: z.enum(["approve", "review", "block"]),
  reasons: z.array(z.string()),
  policyReferences: z.array(z.string()),
});
const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a payments compliance reviewer. Evaluate only the provided facts."],
  ["human", "{input}"],
]);
const chain = prompt.pipe(llm.withStructuredOutput(ComplianceDecisionSchema));

async function evaluatePayment(payment: unknown) {
  const decision = await chain.invoke({ input: JSON.stringify(payment) });
  return decision;
}
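The decision can then be persisted to an append-only audit log. A sketch of one approach, using a JSONL file and a SHA-256 hash of the raw input so the log stays useful without storing PII; the file path, record fields, and model name here are illustrative, and in production you would likely use a database with immutability guarantees:

```typescript
import { createHash } from "node:crypto";
import { appendFileSync } from "node:fs";

interface AuditRecord {
  paymentId: string;
  inputHash: string;      // hash of raw input, so PII need not be stored
  decision: string;
  policyVersion: string;
  model: string;
  timestamp: string;
}

function buildAuditRecord(
  rawInput: unknown,
  decision: string,
  policyVersion: string
): AuditRecord {
  const inputHash = createHash("sha256")
    .update(JSON.stringify(rawInput))
    .digest("hex");
  return {
    paymentId: (rawInput as { paymentId?: string }).paymentId ?? "unknown",
    inputHash,
    decision,
    policyVersion,
    model: "gpt-4o-mini",
    timestamp: new Date().toISOString(),
  };
}

// Append-only JSONL log; one record per line.
function appendAudit(record: AuditRecord, path = "audit.jsonl"): void {
  appendFileSync(path, JSON.stringify(record) + "\n");
}

const rec = buildAuditRecord({ paymentId: "pay_123" }, "approve", "2026.01");
console.log(rec.decision); // "approve"
```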

Production Considerations

  • Keep deterministic controls outside the model

    • Sanctions hits, blocked countries, invalid KYC states, and residency mismatches should be code paths.
    • Use the LLM for judgment calls and explanation generation, not for hard enforcement.
  • Version policies aggressively

    • Store policyVersion with every decision.
    • If regulators ask why a payment was blocked six months ago, you need to reproduce the exact rule set.
  • Log for audit, not just observability

    • Capture input hashes, full structured output, model name, latency, and correlation IDs.
    • Redact PII where possible while keeping enough context for investigations.
  • Control data residency

    • If your payment data must stay in-region, make sure your LLM endpoint and vector store do too.
    • For EU payments data, avoid sending customer identifiers to non-compliant regions.
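The residency control above can be a plain predicate checked before any model call. A sketch, where the data classes and region names are examples to adapt to your own infrastructure:

```typescript
// Allowed processing regions per data-residency class; values are examples.
const allowedRegions: Record<string, string[]> = {
  "eu-payments": ["eu-west-1", "eu-central-1"],
  "us-payments": ["us-east-1"],
};

function residencyOk(dataClass: string, region: string): boolean {
  return (allowedRegions[dataClass] ?? []).includes(region);
}

console.log(residencyOk("eu-payments", "eu-west-1")); // true
console.log(residencyOk("eu-payments", "us-east-1")); // false
```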

Common Pitfalls

  1. Letting the LLM make binary compliance decisions without guardrails

    • Fix it by enforcing sanctions/KYC/residency checks in application code before any model call.
    • Reserve the agent for triage and explanation on gray-area cases.
  2. Using free-form text outputs

    • Fix it by using withStructuredOutput plus zod.
    • Free-form completions are brittle in production and painful for downstream automation.
  3. Ignoring audit requirements

    • Fix it by storing every decision with policy versioning and request metadata.
    • If you cannot explain why a payment was reviewed or blocked, you do not have a compliant system.
  4. Sending sensitive payment data to the wrong region

    • Fix it by pinning inference infrastructure to approved regions and minimizing payloads sent to the model.
    • Pass only fields needed for compliance evaluation; mask account numbers and personal identifiers when possible.
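For pitfall 4, a minimal masking helper that keeps only the last few characters of an account-like identifier. The format rules here are assumptions; adjust them to your own identifier schemes:

```typescript
// Mask all but the last `visible` characters of an identifier.
function maskIdentifier(id: string, visible = 4): string {
  if (id.length <= visible) return "*".repeat(id.length);
  return "*".repeat(id.length - visible) + id.slice(-visible);
}

console.log(maskIdentifier("4111111111111111")); // "************1111"
```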


By Cyprian Aarons, AI Consultant at Topiax.
