How to Build a Compliance-Checking Agent Using LangGraph in TypeScript for Retail Banking

By Cyprian Aarons · Updated 2026-04-21
compliance-checking · langgraph · typescript · retail-banking

A compliance-checking agent for retail banking reviews customer-facing text, transactions, and case notes against policy rules before anything is sent to a human reviewer or downstream system. It matters because small mistakes in disclosures, suitability language, sanctions handling, or data retention can create regulatory exposure fast.

Architecture

  • Input normalizer

    • Takes raw text from chat, email, CRM notes, or application fields.
    • Strips irrelevant formatting and maps the payload into a stable schema (one possible shape is sketched after this list).
  • Policy retrieval layer

    • Pulls the relevant banking policy snippets for the jurisdiction, product type, and channel.
    • Keeps the agent from checking mortgage disclosures with credit-card rules.
  • Compliance classifier

    • Uses an LLM to detect likely violations such as missing disclosures, misleading claims, PII leakage, or prohibited language.
    • Returns structured findings, not free-form prose.
  • Rules engine

    • Applies deterministic checks for hard requirements like mandatory phrases, age restrictions, KYC flags, and escalation thresholds.
    • This is where you encode non-negotiable bank policy.
  • Decision router

    • Sends low-risk items to approve, medium-risk items to manual review, and high-risk items to block.
    • Keeps the final decision explainable.
  • Audit logger

    • Persists input hash, retrieved policy IDs, model output, and final decision.
    • Required for internal audit and regulator traceability.
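
As a concrete reference point, here is one possible shape for the normalizer's output, assuming Zod for validation. The field names are illustrative, not a required contract:

import { z } from "zod";

// Illustrative normalized-input shape; adjust the fields to your own channels.
const NormalizedInputSchema = z.object({
  channel: z.enum(["chat", "email", "crm_note", "application_field"]),
  jurisdiction: z.string(),          // e.g. "US", "UK"
  productType: z.string(),           // e.g. "checking", "personal_loan"
  text: z.string().min(1),           // cleaned customer-facing text
  receivedAt: z.string().datetime(), // ISO timestamp, useful for audit ordering
});

type NormalizedInput = z.infer<typeof NormalizedInputSchema>;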

Implementation

1) Define the state and node contracts

Use LangGraph’s StateGraph with a typed state object. For retail banking, keep the state explicit so every step is auditable.

import { StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const ComplianceFindingSchema = z.object({
  severity: z.enum(["low", "medium", "high"]),
  category: z.enum(["disclosure", "pii", "fair_lending", "sanctions", "misleading_claim"]),
  issue: z.string(),
  recommendation: z.string(),
});

type ComplianceFinding = z.infer<typeof ComplianceFindingSchema>;

type ComplianceState = {
  inputText: string;
  jurisdiction: string;
  productType: string;
  policySnippets: string[];
  findings: ComplianceFinding[];
  decision?: "approve" | "manual_review" | "block";
};

// temperature 0 keeps findings as reproducible as possible for audit review
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

2) Add retrieval and compliance analysis nodes

For a production bank workflow, retrieval should be deterministic over an approved policy store. The LLM should only interpret retrieved policy; it should not invent policy.

async function retrievePolicies(state: ComplianceState): Promise<Partial<ComplianceState>> {
  // Stand-in for an approved policy store; production retrieval should also
  // key on jurisdiction and channel, not just product type.
  const policiesByProduct: Record<string, string[]> = {
    checking: [
      "Do not promise fee waivers unless explicitly approved.",
      "Disclose overdraft terms when discussing account opening.",
      "Never request full card PAN in chat.",
    ],
    personal_loan: [
      "Do not make approval guarantees.",
      "Include representative APR disclaimer when quoting rates.",
      "Escalate any fair lending concern.",
    ],
  };

  return {
    policySnippets: policiesByProduct[state.productType] ?? [],
  };
}

async function analyzeCompliance(state: ComplianceState): Promise<Partial<ComplianceState>> {
  const prompt = `
You are a retail banking compliance checker.
Jurisdiction: ${state.jurisdiction}
Product: ${state.productType}

Policy:
${state.policySnippets.map((p) => `- ${p}`).join("\n")}

Customer text:
${state.inputText}

Return JSON array of findings with fields:
severity, category, issue, recommendation
Only report issues supported by the policy or obvious PII leakage.
`;

  const response = await llm.invoke(prompt);

  // Models often wrap JSON in markdown fences; strip them before parsing, then
  // validate with the Zod schema below so malformed output fails loudly.
  const raw = (response.content as string).trim().replace(/^```(?:json)?\s*|\s*```$/g, "");
  const parsed = JSON.parse(raw);

  return {
    findings: z.array(ComplianceFindingSchema).parse(parsed),
  };
}
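
Parsing raw model text is the fragile part of this node. If your model supports structured output, LangChain's withStructuredOutput binds the Zod schema to the call and returns validated objects directly; a sketch, assuming an OpenAI model with structured-output support (note the top-level object wrapper, which OpenAI requires):

const FindingsSchema = z.object({
  findings: z.array(ComplianceFindingSchema),
});

async function analyzeComplianceStructured(
  state: ComplianceState
): Promise<Partial<ComplianceState>> {
  // The schema is enforced at the model layer, so no manual JSON.parse is needed.
  const structuredLlm = llm.withStructuredOutput(FindingsSchema);
  const result = await structuredLlm.invoke(
    [
      "You are a retail banking compliance checker.",
      `Jurisdiction: ${state.jurisdiction}`,
      `Product: ${state.productType}`,
      "Policy:",
      ...state.policySnippets.map((p) => `- ${p}`),
      "Customer text:",
      state.inputText,
      "Only report issues supported by the policy or obvious PII leakage.",
    ].join("\n")
  );
  return { findings: result.findings };
}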

3) Apply deterministic routing and build the graph

This is where LangGraph earns its keep. Use addConditionalEdges to route based on severity instead of forcing everything through one path.

function routeDecision(state: ComplianceState): string {
  if (state.findings.some((f) => f.severity === "high")) return "block";
  if (state.findings.some((f) => f.severity === "medium")) return "manual_review";
  return "approve";
}

async function setDecision(state: ComplianceState): Promise<Partial<ComplianceState>> {
  return { decision: routeDecision(state) };
}

const graph = new StateGraph<ComplianceState>({
  // Each channel holds the latest value written by a node (last-value semantics).
  channels: {
    inputText: null,
    jurisdiction: null,
    productType: null,
    policySnippets: null,
    findings: null,
    decision: null,
  },
})
  .addNode("retrievePolicies", retrievePolicies)
  .addNode("analyzeCompliance", analyzeCompliance)
  .addNode("setDecision", setDecision);

graph.addEdge(START, "retrievePolicies");
graph.addEdge("retrievePolicies", "analyzeCompliance");
graph.addEdge("analyzeCompliance", "setDecision");

// Every decision terminates this minimal example; a production graph would
// route each outcome to its own handler node (see the sketch below).
graph.addConditionalEdges("setDecision", (state) => state.decision!, {
  approve: END,
  manual_review: END,
  block: END,
});

The clean pattern here is to branch after setDecision into downstream handlers like approve, manual_review, and block. In a real system you would add separate nodes for each outcome and then end each path explicitly.
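
A sketch of that fuller shape, replacing the conditional edge above. The handler bodies here are hypothetical placeholders for your own queueing and blocking logic:

// Hypothetical outcome handlers; swap the bodies for real side effects.
async function handleApprove(_state: ComplianceState) {
  return {}; // e.g. release the message to the downstream system
}
async function handleManualReview(_state: ComplianceState) {
  return {}; // e.g. enqueue for an analyst with the findings attached
}
async function handleBlock(_state: ComplianceState) {
  return {}; // e.g. notify the sender and record the block reason
}

graph
  .addNode("approve", handleApprove)
  .addNode("manual_review", handleManualReview)
  .addNode("block", handleBlock);

graph.addConditionalEdges("setDecision", (state) => state.decision!, {
  approve: "approve",
  manual_review: "manual_review",
  block: "block",
});

graph.addEdge("approve", END);
graph.addEdge("manual_review", END);
graph.addEdge("block", END);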

4) Compile and run with an audit-friendly wrapper

Keep the graph execution wrapped so you can log every request with a correlation ID. That gives you traceability for complaints and model reviews.

const app = graph.compile();

async function runComplianceCheck() {
  const result = await app.invoke({
    inputText:
      "We can guarantee your personal loan will be approved if you apply today. Also send your full card number here.",
    jurisdiction: "US",
    productType: "personal_loan",
    policySnippets: [],
    findings: [],
  });

  // In production, emit this with a correlation ID and input hash
  // (see the audit sketch below) instead of writing to the console.
  console.log({
    decision: result.decision,
    findings: result.findings,
  });
}

runComplianceCheck().catch(console.error);
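
The audit logger from the architecture section is the piece this example omits. A minimal sketch, assuming Node's crypto module and an append-only audit store you would wire in yourself:

import { createHash, randomUUID } from "node:crypto";

// Illustrative audit record; align the fields with your retention policy.
type AuditRecord = {
  correlationId: string;
  inputHash: string;        // hash instead of raw text to avoid logging PII
  policySnippets: string[]; // policy document IDs in a real deployment
  findings: ComplianceFinding[];
  decision: string;
  timestamp: string;
};

async function runAuditedCheck(
  input: Omit<ComplianceState, "policySnippets" | "findings" | "decision">
) {
  const correlationId = randomUUID();
  const result = await app.invoke({ ...input, policySnippets: [], findings: [] });

  const record: AuditRecord = {
    correlationId,
    inputHash: createHash("sha256").update(input.inputText).digest("hex"),
    policySnippets: result.policySnippets,
    findings: result.findings,
    decision: result.decision ?? "unknown",
    timestamp: new Date().toISOString(),
  };
  console.log(record); // write to your append-only audit store in production
  return result;
}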

Production Considerations

  • Data residency

    • Keep customer data in-region if your bank operates under local residency rules.
    • If policy text can leave region but customer data cannot, split retrieval from inference carefully.
  • Audit logging

    • Store prompt version, policy document IDs, model version, decision outcome, and timestamp.
    • Hash sensitive inputs instead of logging raw PII when possible.
  • Guardrails

    • Add hard regex checks for PANs, account numbers, SSNs/NINs before any LLM call (a sketch follows this list).
    • Block outbound responses that contain prohibited advice or unsupported approval language.
  • Monitoring

    • Track false positives by product line and jurisdiction.
    • Alert on spikes in manual review rates; that usually means a broken prompt or changed policy source.
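
A minimal pre-LLM guardrail sketch for the regex checks mentioned above. The patterns are illustrative starting points, not complete coverage; PAN detection in particular needs tuning and a Luhn check against real traffic:

// Illustrative patterns only; extend and tune before relying on them.
const HARD_BLOCK_PATTERNS: Array<{ name: string; pattern: RegExp }> = [
  { name: "card_pan", pattern: /\b(?:\d[ -]?){13,19}\b/ }, // candidate card numbers
  { name: "us_ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/ },    // formatted US SSN
  { name: "uk_nino", pattern: /\b[A-Z]{2}\d{6}[A-D]\b/i }, // UK National Insurance number
];

function preLlmGuardrail(text: string): string[] {
  return HARD_BLOCK_PATTERNS
    .filter(({ pattern }) => pattern.test(text))
    .map(({ name }) => name);
}

// Any hit should short-circuit straight to "block" before the LLM is called.
console.log(preLlmGuardrail("My SSN is 123-45-6789")); // ["us_ssn"]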

Common Pitfalls

  1. Letting the model invent policy

    • Fix this by grounding every check in retrieved policy snippets only.
    • If a rule is not in your approved knowledge base or deterministic ruleset, do not enforce it through generation.
  2. Using one generic workflow for all products

    • Checking deposits, cards, unsecured lending, and mortgages with the same rules causes bad decisions.
    • Split by product type early in the graph so each path has its own policy bundle and thresholds (see the sketch after this list).
  3. Skipping deterministic checks for regulated fields

    • An LLM should not be your only defense against PII leakage or mandatory disclosure misses.
    • Run regexes and rule checks before the graph returns an approval.
  4. Ignoring human review thresholds

    • Retail banking compliance is not binary.
    • Route medium-confidence cases to analysts; reserve auto-blocks for clear violations like sanctions hits or explicit PII exposure.
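
For pitfall 2, one way to split early is to branch on productType straight from the entry point, replacing the single START edge in the example above. The mortgage node here is hypothetical:

// Hypothetical product-specific retrieval node with its own policy bundle.
async function retrieveMortgagePolicies(
  _state: ComplianceState
): Promise<Partial<ComplianceState>> {
  return {
    policySnippets: ["Include APR and term disclosures with any rate quote."],
  };
}

graph.addNode("mortgagePolicies", retrieveMortgagePolicies);

graph.addConditionalEdges(
  START,
  (state: ComplianceState) =>
    state.productType === "mortgage" ? "mortgagePolicies" : "retrievePolicies",
  { mortgagePolicies: "mortgagePolicies", retrievePolicies: "retrievePolicies" }
);
graph.addEdge("mortgagePolicies", "analyzeCompliance");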

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
