How to Build a Compliance-Checking Agent Using LangGraph in TypeScript for Fintech

By Cyprian Aarons · Updated 2026-04-21
Tags: compliance-checking, langgraph, typescript, fintech

A compliance checking agent reviews a customer request, transaction, or support case against policy rules before anything sensitive moves forward. In fintech, that matters because a bad decision is not just a UX bug; it can trigger AML exposure, sanctions violations, audit findings, and regulatory headaches.

Architecture

Build this agent with a small number of deterministic components:

  • Input normalizer

    • Turns raw request payloads into a structured compliance task.
    • Extracts fields like customer country, transaction amount, product type, and risk flags.
  • Policy retrieval layer

    • Pulls the right rule set for the jurisdiction and product.
    • Example: KYC thresholds for retail onboarding in the UK are not the same as wire transfer checks in the US.
  • Rule evaluation node

    • Runs deterministic checks against policy data.
    • This should be code, not an LLM prompt.
  • Exception reviewer

    • Uses an LLM only for borderline cases or narrative summaries.
    • Keeps human-readable reasoning separate from hard enforcement.
  • Audit trail writer

    • Stores inputs, rule outcomes, model outputs, and final decision.
    • Required for internal audit and regulator review.
  • Decision gate

    • Returns approve, reject, or escalate.
    • This is where your application enforces the outcome.
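The policy retrieval layer can start as a simple keyed registry before it grows into a real policy service. The sketch below is illustrative only: the jurisdiction/product keys, thresholds, and version strings are assumptions, not real regulatory values.

```typescript
// Minimal policy-registry sketch. Keys, thresholds, and versions are
// illustrative assumptions, not real rule sets.
type PolicyRule = {
  policyVersion: string;
  kycThreshold: number; // amount above which enhanced checks apply
  sanctionsScreening: boolean;
};

const policyRegistry: Record<string, PolicyRule> = {
  "UK:retail_onboarding": { policyVersion: "2026-01", kycThreshold: 8000, sanctionsScreening: true },
  "US:wire_transfer": { policyVersion: "2026-02", kycThreshold: 10000, sanctionsScreening: true },
};

function getPolicy(country: string, product: string): PolicyRule {
  const rule = policyRegistry[`${country}:${product}`];
  if (!rule) {
    // Fail closed: an unknown jurisdiction/product combination should
    // error (and escalate upstream) rather than silently pass with no rules.
    throw new Error(`No policy for ${country}:${product}`);
  }
  return rule;
}
```

The fail-closed lookup matters: a missing rule set should never be treated as "no checks required".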

Implementation

1) Define the graph state and compliance checks

Use LangGraph’s StateGraph to model the flow. Keep the state explicit so every decision is auditable.

import { StateGraph, Annotation, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

type ComplianceDecision = "approve" | "reject" | "escalate";

interface ComplianceRequest {
  customerId: string;
  country: string;
  amount: number;
  nameMatch: "clear" | "possible_sanctions_hit";
  eddCompleted: boolean;
}

interface Policy {
  requiresEnhancedDueDiligence: boolean;
  sanctionsScreening: boolean;
}

const ComplianceState = Annotation.Root({
  request: Annotation<ComplianceRequest>(),
  policy: Annotation<Policy>(),
  findings: Annotation<string[]>(),
  decision: Annotation<ComplianceDecision>(),
  rationale: Annotation<string>(),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0, // deterministic output helps auditability
});

// Deterministic policy resolution: thresholds live in code, not in prompts.
function loadPolicy(request: ComplianceRequest): Policy {
  if (request.country === "US" && request.amount > 10000) {
    return { requiresEnhancedDueDiligence: true, sanctionsScreening: true };
  }
  return { requiresEnhancedDueDiligence: false, sanctionsScreening: true };
}

async function evaluateRules(state: typeof ComplianceState.State) {
  const findings: string[] = [];

  if (!state.policy.sanctionsScreening) {
    findings.push("Sanctions screening missing");
    return { findings, decision: "reject" as const };
  }

  if (state.request.nameMatch === "possible_sanctions_hit") {
    findings.push("Possible sanctions match");
    return { findings, decision: "escalate" as const };
  }

  if (state.policy.requiresEnhancedDueDiligence && !state.request.eddCompleted) {
    findings.push("EDD required but not completed");
    return { findings, decision: "reject" as const };
  }

  findings.push("All mandatory checks passed");
  return { findings, decision: "approve" as const };
}

2) Add nodes for policy loading and exception review

The rule engine stays deterministic. The LLM only helps explain borderline cases to analysts or produce a concise case note.

async function loadPolicyNode(state: typeof ComplianceState.State) {
  return { policy: loadPolicy(state.request) };
}

async function reviewExceptionNode(state: typeof ComplianceState.State) {
  const prompt = `
You are a fintech compliance analyst.
Summarize why this case needs escalation or approval.
Request:
${JSON.stringify(state.request)}
Findings:
${JSON.stringify(state.findings)}
Decision:
${state.decision}
`;

  const response = await llm.invoke(prompt);
  return { rationale: response.content.toString() };
}

3) Wire the graph with conditional routing

This is the core pattern. Route approved cases to final output, escalate risky ones for review, and reject clear violations immediately.

const graph = new StateGraph(ComplianceState)
  .addNode("loadPolicy", loadPolicyNode)
  .addNode("evaluateRules", evaluateRules)
  .addNode("reviewException", reviewExceptionNode)
  .addEdge(START, "loadPolicy")
  .addEdge("loadPolicy", "evaluateRules")
  .addConditionalEdges("evaluateRules", (state) => state.decision, {
    approve: END,
    reject: END,
    escalate: "reviewException",
  })
  .addEdge("reviewException", END)
  .compile();

async function run() {
  // Only the request needs to be supplied; the graph fills in the
  // remaining state channels as it runs.
  const result = await graph.invoke({
    request: {
      customerId: "cust_123",
      country: "US",
      amount: 25000,
      nameMatch: "clear",
      eddCompleted: false,
    },
  });

  console.log(result);
}

run().catch(console.error);

Why this pattern works

| Concern | Deterministic node | LLM node |
| --- | --- | --- |
| Sanctions / AML enforcement | Yes | No |
| Policy interpretation | Yes | No |
| Case summarization | No | Yes |
| Auditability | High | Medium |
| Regulatory defensibility | High | Lower unless constrained |

Production Considerations

  • Keep enforcement code outside the model

    The agent can explain decisions, but it should not invent them. For fintech compliance, hard stops like sanctions hits and missing EDD must be enforced in TypeScript before any downstream action.

  • Persist full audit traces

    Store input payloads, resolved policy version, node outputs, timestamps, and final decisions. Regulators care about what you knew at decision time, not what your model later thinks it meant.

  • Respect data residency

    If customer data must stay in-region, deploy the graph runtime and any vector store or logging pipeline inside that boundary. Don’t send PII or account data to external services unless your legal team has approved the transfer path.

  • Add human override paths

    Escalation should create a review task with all evidence attached. In regulated workflows, “model says no” is not enough; analysts need to override with justification and that override must be logged.
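The first two considerations can be combined into one gate: enforce the outcome in code and write the audit record before anything acts on the decision. This is a sketch under assumptions; `AuditSink` is a placeholder interface for whatever append-only store you actually use (a WORM bucket, audit database, etc.).

```typescript
// Sketch of a decision gate with an audit trail. AuditSink is a
// hypothetical placeholder for a durable, append-only audit store.
type Decision = "approve" | "reject" | "escalate";

interface AuditSink {
  write(record: Record<string, unknown>): void;
}

function decisionGate(
  decision: Decision,
  findings: string[],
  policyVersion: string,
  sink: AuditSink,
): { allowed: boolean; needsReview: boolean } {
  // Record what was known at decision time, before acting on it.
  sink.write({
    decision,
    findings,
    policyVersion,
    decidedAt: new Date().toISOString(),
  });
  // Enforcement lives in code: only an explicit approve proceeds,
  // and escalations create a human review task downstream.
  return { allowed: decision === "approve", needsReview: decision === "escalate" };
}
```

Because the audit write happens before the return value is consumed, a crash after the gate still leaves a record of the decision that was made.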

Common Pitfalls

  1. Using the LLM as the compliance engine

    • Mistake: asking the model to decide whether a payment is allowed.
    • Fix: use deterministic rules for mandatory checks and reserve the model for summaries or ambiguous edge cases.
  2. Not versioning policies

    • Mistake: loading “current rules” without storing which version was used.
    • Fix: attach policyVersion to state and persist it with every run so audits can reproduce the exact decision path.
  3. Ignoring escalation thresholds

    • Mistake: treating every borderline case like a normal approval flow.
    • Fix: define explicit thresholds for sanctions matches, high-risk geographies, unusual amounts, and missing KYC/EDD fields. Route those cases to human review by default.
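Policy versioning (pitfall 2) can be as light as stamping the resolved version onto every persisted run. The record shape below is an assumption for illustration; the point is that an auditor can filter runs by the exact rule version that applied.

```typescript
// Sketch: persist the resolved policy version with each run so audits
// can reproduce the decision path. Field names are illustrative.
interface RunRecord {
  runId: string;
  policyVersion: string;
  decision: "approve" | "reject" | "escalate";
  findings: string[];
}

const runLog: RunRecord[] = [];

function recordRun(record: RunRecord): RunRecord {
  runLog.push(record);
  return record;
}

// Audit helper: every run evaluated under a given rule version.
function runsUnderPolicy(version: string): RunRecord[] {
  return runLog.filter((r) => r.policyVersion === version);
}
```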

A compliance agent in fintech is only useful if it is boringly predictable. LangGraph gives you that control by making each step explicit, traceable, and easy to audit under pressure.


By Cyprian Aarons, AI Consultant at Topiax.
