How to Build a Compliance-Checking Agent Using LangGraph in TypeScript for Investment Banking

By Cyprian Aarons | Updated 2026-04-21
Tags: compliance-checking, langgraph, typescript, investment-banking

A compliance checking agent in investment banking reviews a trade, email, pitch deck, or client interaction against firm policy and regulatory rules before it moves forward. It matters because the failure mode is expensive: blocked trades, regulatory findings, audit gaps, and reputational damage.

Architecture

A production compliance agent for investment banking needs these components:

  • Input normalizer

    • Converts raw requests into a structured case object.
    • Extracts desk, product, jurisdiction, client type, timestamp, and source channel.
  • Policy retrieval layer

    • Pulls the relevant internal controls, restricted lists, suitability rules, and jurisdiction-specific requirements.
    • Keeps policy versioned so every decision can be traced to the exact rule set used.
  • Risk classifier

    • Tags the request by severity: clear pass, needs human review, or hard block.
    • Handles common banking scenarios like MNPI risk, sanctions exposure, market abuse signals, and communication approval.
  • Decision engine

    • Applies deterministic checks first.
    • Uses an LLM only for interpretation or summarization where rules are ambiguous.
  • Audit logger

    • Persists the input, rule references, model output, final decision, and reviewer identity.
    • This is non-negotiable in regulated environments.
  • Human escalation path

    • Routes uncertain cases to compliance officers.
    • Preserves context so reviewers do not need to reconstruct the case from scratch.
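
These components can be anchored around a single case object that the input normalizer produces and every later component reads. A minimal sketch — the field names and enum values are illustrative, not from any specific firm schema:

```typescript
// Hypothetical shape of the structured case object produced by the
// input normalizer; downstream components read only this, never raw input.
type ComplianceCase = {
  desk: string;
  product: string;
  jurisdiction: string;       // ISO 3166-1 alpha-2, e.g. "GB"
  clientType: "retail" | "professional" | "eligible_counterparty";
  timestamp: string;          // ISO 8601
  sourceChannel: "email" | "chat" | "order_entry" | "pitch_deck";
};

// The normalizer's only job: raw input in, ComplianceCase out,
// with defensive defaults so later nodes never see undefined fields.
function normalize(raw: Record<string, unknown>): ComplianceCase {
  return {
    desk: String(raw.desk ?? "unknown"),
    product: String(raw.product ?? "unknown"),
    jurisdiction: String(raw.jurisdiction ?? "unknown").toUpperCase(),
    clientType: (raw.clientType as ComplianceCase["clientType"]) ?? "retail",
    timestamp: new Date().toISOString(),
    sourceChannel: (raw.source as ComplianceCase["sourceChannel"]) ?? "email",
  };
}
```

Keeping the case object flat and fully populated up front is what makes the later audit trail readable: every decision references the same few fields.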

Implementation

1) Define the graph state and typed outputs

For compliance work, keep the state explicit. You want every node to read and write predictable fields so you can audit what happened later.

import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { z } from "zod";

const ComplianceVerdict = z.object({
  decision: z.enum(["approve", "review", "reject"]),
  reason: z.string(),
  ruleRefs: z.array(z.string()),
});

type ComplianceVerdict = z.infer<typeof ComplianceVerdict>;

const GraphState = Annotation.Root({
  request: Annotation<any>(),
  normalized: Annotation<any>(),
  policies: Annotation<string[]>(),
  verdict: Annotation<ComplianceVerdict | null>(),
  auditTrail: Annotation<string[]>(),
});

export type ComplianceState = typeof GraphState.State;

2) Add deterministic checks before any model call

In investment banking, hard rules should not depend on an LLM. If a trade hits a restricted list or a prohibited jurisdiction rule, reject it immediately.

const normalizeRequest = async (state: ComplianceState) => {
  const req = state.request;
  return {
    normalized: {
      desk: req.desk,
      product: req.product,
      clientType: req.clientType,
      jurisdiction: req.jurisdiction,
      source: req.source,
      amount: req.amount,
    },
    auditTrail: [...(state.auditTrail ?? []), "normalized request"],
  };
};

const loadPolicies = async (state: ComplianceState) => {
  const { desk, jurisdiction } = state.normalized;

  // Replace with DB / policy service lookup
  const policies = [
    `Desk policy for ${desk}`,
    `Jurisdiction policy for ${jurisdiction}`,
    "Restricted list screening",
    "MNPI and market abuse controls",
  ];

  return {
    policies,
    auditTrail: [...(state.auditTrail ?? []), `loaded policies for ${desk}/${jurisdiction}`],
  };
};

const hardBlockCheck = async (state: ComplianceState) => {
  const { product, jurisdiction } = state.normalized;

  if (jurisdiction === "IR" || product === "restricted_security") {
    return {
      verdict: {
        decision: "reject",
        reason: "Blocked by jurisdiction or restricted security rule",
        ruleRefs: ["JUR-001", "RSTR-014"],
      },
      auditTrail: [...(state.auditTrail ?? []), "hard block triggered"],
    };
  }

  return { auditTrail: [...(state.auditTrail ?? []), "hard block check passed"] };
};
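
The restricted-list half of hardBlockCheck deserves the same treatment as the jurisdiction rule: a versioned, deterministic lookup. A sketch — the list contents, version string, and rule ID format are placeholders, and in production the list would come from a policy-service snapshot rather than a hardcoded Set:

```typescript
// Hypothetical in-memory restricted list; in production, load a
// versioned snapshot from the policy service instead.
const RESTRICTED_LIST_VERSION = "2026-04-20";
const restrictedIsins = new Set(["US0000000001", "GB0000000002"]);

function screenRestrictedList(isin: string): { blocked: boolean; ruleRef: string } {
  return {
    blocked: restrictedIsins.has(isin),
    // Embed the list version so the audit trail pins the exact snapshot
    // that was in force when the decision was made.
    ruleRef: `RSTR-014@${RESTRICTED_LIST_VERSION}`,
  };
}
```

Embedding the snapshot version in the rule reference is what later lets you answer "which list was this screened against?" without guessing.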

3) Use an LLM only for ambiguous cases

If the deterministic layer does not reject the request, use an LLM to interpret policy text and produce a structured verdict. Keep the output schema tight so you can validate it before routing.

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ modelName: "gpt-4o-mini", temperature: 0 });

const llmReview = async (state: ComplianceState) => {
  if (state.verdict?.decision === "reject") return {};

  const prompt = `
You are reviewing an investment banking compliance case.
Request:
${JSON.stringify(state.normalized)}

Policies:
${state.policies.join("\n")}

Return JSON with decision approve|review|reject, reason, ruleRefs.
`;

  const response = await llm.invoke(prompt);
  const parsed = ComplianceVerdict.parse(JSON.parse(response.content as string));

  return {
    verdict: parsed,
    auditTrail: [...(state.auditTrail ?? []), "llm review completed"],
  };
};

4) Wire the graph and execute it

This is the actual LangGraph pattern you want in production. Deterministic nodes run first; the model only runs if needed; every step appends to an audit trail.

const graph = new StateGraph(GraphState)
  .addNode("normalizeRequest", normalizeRequest)
  .addNode("loadPolicies", loadPolicies)
  .addNode("hardBlockCheck", hardBlockCheck)
  .addNode("llmReview", llmReview)
  .addEdge(START, "normalizeRequest")
  .addEdge("normalizeRequest", "loadPolicies")
  .addEdge("loadPolicies", "hardBlockCheck")
  // Skip the model entirely when a deterministic rule already rejected.
  .addConditionalEdges("hardBlockCheck", (state) =>
    state.verdict?.decision === "reject" ? END : "llmReview",
  )
  .addEdge("llmReview", END)
  .compile();

async function runComplianceCase(request: any) {
  const result = await graph.invoke({
    request,
    normalized: null,
    policies: [],
    verdict: null,
    auditTrail: [],
  });

  return result.verdict;
}
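
Because the deterministic rules carry the regulatory weight, it helps to pull them into pure functions that the node wraps, so they can be unit-tested without compiling the graph or supplying a model key. A sketch — the function name is illustrative, with the rule logic and rule IDs copied from hardBlockCheck above:

```typescript
// Hypothetical extraction: the rule itself as a pure function,
// trivially testable without the graph or any async plumbing.
type HardBlockResult = { blocked: boolean; ruleRefs: string[] };

function evaluateHardBlock(product: string, jurisdiction: string): HardBlockResult {
  if (jurisdiction === "IR" || product === "restricted_security") {
    return { blocked: true, ruleRefs: ["JUR-001", "RSTR-014"] };
  }
  return { blocked: false, ruleRefs: [] };
}
```

The node then becomes a thin adapter that calls the pure function and appends to the audit trail, which keeps the compliance logic itself in code your test suite can exercise exhaustively.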

Production Considerations

  • Deploy in-region

    • For banking workloads, keep processing and storage inside approved regions.
    • If your firm has data residency requirements in EMEA or APAC, do not send raw client content across borders.
  • Log everything needed for audit

    • Persist input hashes, policy versions, model version, timestamps, final verdicts, and reviewer overrides.
    • Regulators care about reproducibility more than clever prompts.
  • Add guardrails around model usage

    • Use deterministic blocks for sanctions screening, restricted lists, and jurisdictional prohibitions.
    • Do not let the LLM overrule fixed controls.
  • Monitor false positives and escalation rates

    • Track how often cases are rejected versus sent to humans. A compliance agent that floods reviewers with noise gets turned off fast.
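
The audit bullet above can be made concrete as a single immutable record per decision. A sketch, assuming Node's built-in crypto module — the field set and naming are illustrative, not a regulatory schema:

```typescript
import { createHash } from "node:crypto";

// One immutable row per decision; the hashes let auditors verify that
// the stored input and policy set are exactly what the agent saw.
type AuditRecord = {
  caseId: string;
  inputHash: string;      // sha256 of the normalized request
  policyVersion: string;  // e.g. git tag or policy-service snapshot ID
  policyHash: string;     // sha256 of the concatenated policy texts
  modelVersion: string;   // "none" when only deterministic checks ran
  decision: "approve" | "review" | "reject";
  decidedAt: string;      // ISO 8601
};

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function buildAuditRecord(
  caseId: string,
  normalized: unknown,
  policies: string[],
  policyVersion: string,
  modelVersion: string,
  decision: AuditRecord["decision"],
): AuditRecord {
  return {
    caseId,
    inputHash: sha256(JSON.stringify(normalized)),
    policyVersion,
    policyHash: sha256(policies.join("\n")),
    modelVersion,
    decision,
    decidedAt: new Date().toISOString(),
  };
}
```

Hashing the normalized input rather than the raw request means the record reproduces exactly what the decision engine saw, which is the reproducibility regulators ask about.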

Common Pitfalls

  1. Letting the LLM make final decisions on hard rules

    • Fix this by putting sanctions checks, restricted list checks, and residency rules in deterministic code before any model call.
  2. Not versioning policy inputs

    • If you cannot prove which policy file was used at decision time, your audit trail is weak.
    • Store policy IDs and hashes alongside each verdict.
  3. Returning unstructured model output

    • Free-form text is hard to validate and impossible to route reliably.
    • Force structured JSON with zod validation before writing anything to your case store.
  4. Ignoring human review workflow

    • Some cases will always be ambiguous.
    • Build a clean escalation path with full context so compliance officers can approve or reject without rework.
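
Pitfall 3's fix is a validation gate between the model and the case store. With zod already in the stack you would reach for ComplianceVerdict.safeParse; a dependency-free sketch of the same gate, under the assumption that anything invalid routes to human review rather than being written:

```typescript
type Verdict = { decision: "approve" | "review" | "reject"; reason: string; ruleRefs: string[] };

// Returns a typed verdict or null; callers must route null to human
// review instead of writing free-form model text to the case store.
function parseVerdict(raw: string): Verdict | null {
  let obj: unknown;
  try {
    obj = JSON.parse(raw);
  } catch {
    return null; // not even JSON: never store it
  }
  const v = obj as Partial<Verdict>;
  const validDecision =
    v.decision === "approve" || v.decision === "review" || v.decision === "reject";
  const valid =
    validDecision &&
    typeof v.reason === "string" &&
    Array.isArray(v.ruleRefs) &&
    v.ruleRefs.every((r) => typeof r === "string");
  return valid ? (v as Verdict) : null;
}
```

The null branch is the escalation path from pitfall 4: an unparseable verdict is itself an ambiguous case, and it should land in front of a compliance officer with full context.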

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

