How to Build a Compliance Checking Agent Using LangChain in TypeScript for Investment Banking
A compliance checking agent for investment banking reviews client communications, trade instructions, pitch materials, and internal notes against policy. It matters because one missed restriction can create regulatory exposure, failed audits, or a blocked deal.
Architecture
- Input layer
  - Accepts text from emails, chat transcripts, trade requests, or draft documents.
  - Normalizes the payload into a consistent schema with metadata like desk, region, client type, and timestamp.
- Policy retrieval layer
  - Pulls relevant compliance rules from a vector store or document store.
  - Uses retriever-backed context so the agent checks against the right jurisdiction and product class.
- LLM reasoning layer
  - Uses `ChatOpenAI` through LangChain to classify risk, identify violations, and explain why a statement is non-compliant.
  - Produces structured output so downstream systems can route cases deterministically.
- Decision layer
  - Converts model output into `approve`, `reject`, or `escalate`.
  - Applies hard rules for restricted terms, MNPI references, suitability issues, and jurisdiction-specific constraints.
- Audit logging layer
  - Stores input, retrieved policy snippets, model output, timestamps, and final decision.
  - Gives compliance teams a traceable record for review and regulator queries.
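To make the input layer concrete, here is a minimal sketch of a normalized payload. The field names beyond desk, region, client type, and timestamp are assumptions rather than a fixed contract, and the shape maps onto the `ComplianceInput` type used in the implementation below.

```ts
// Sketch of a normalized input payload. Fields beyond desk, region,
// client type, and timestamp are assumptions, not a fixed contract.
interface NormalizedMessage {
  text: string;                                              // raw content to be checked
  channel: "email" | "chat" | "trade_request" | "document";  // assumed source channels
  desk: string;                                              // e.g. "IBD"
  region: string;                                            // e.g. "UK"
  clientType: "institutional" | "retail" | "sovereign";
  receivedAt: string;                                        // ISO-8601 timestamp
}
```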
Implementation
1. Install the LangChain packages you actually need
For TypeScript, keep the dependency surface small. You want the model wrapper, prompt utilities, structured output support, and a retriever if you are grounding against policy docs.
npm install langchain @langchain/openai @langchain/core zod
Set your environment variables:
export OPENAI_API_KEY="..."
2. Define a strict compliance result schema
Do not let the model free-write decisions. In investment banking, you need predictable outputs that map cleanly to workflow states and audit logs.
import { z } from "zod";
export const ComplianceResultSchema = z.object({
decision: z.enum(["approve", "reject", "escalate"]),
riskLevel: z.enum(["low", "medium", "high"]),
reasons: z.array(z.string()).min(1),
citedPolicies: z.array(z.string()),
});
export type ComplianceResult = z.infer<typeof ComplianceResultSchema>;
This schema becomes your contract with downstream systems. If the model cannot produce valid JSON-like structure, fail closed and escalate.
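One way to enforce that fail-closed behavior is a small validation helper around the raw model output. The `parseOrEscalate` name and the fallback values below are assumptions, not part of LangChain; only the zod `safeParse` call is standard.

```ts
import { ComplianceResultSchema, type ComplianceResult } from "./schema";

// Sketch only: validate raw model output and fail closed to "escalate"
// whenever it does not match the schema.
export function parseOrEscalate(raw: unknown): ComplianceResult {
  const parsed = ComplianceResultSchema.safeParse(raw);
  if (parsed.success) return parsed.data;
  return {
    decision: "escalate",
    riskLevel: "high",
    reasons: ["Model output failed schema validation"],
    citedPolicies: [],
  };
}
```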
3. Build the LangChain chain with retrieval + structured output
The pattern below uses ChatOpenAI, PromptTemplate, and RunnableSequence. The retriever is mocked here as an interface so you can plug in a vector store backed by approved policy documents later.
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "langchain/output_parsers";
import { ComplianceResultSchema } from "./schema";
type ComplianceInput = {
text: string;
desk: string;
region: string;
clientType: "institutional" | "retail" | "sovereign";
};
async function retrievePolicyContext(input: ComplianceInput): Promise<string> {
// Replace with a real retriever backed by approved policy docs.
return [
"Policy A: Do not promise guaranteed returns.",
"Policy B: Escalate any mention of MNPI or unpublished deal terms.",
"Policy C: Marketing materials must include required risk disclosures.",
`Region rule: ${input.region} data must remain within approved residency boundaries.`,
].join("\n");
}
const parser = StructuredOutputParser.fromZodSchema(ComplianceResultSchema);
const prompt = PromptTemplate.fromTemplate(`
You are a compliance checker for investment banking.
Assess the message against the provided policy context only.
Desk: {desk}
Region: {region}
Client Type: {clientType}
Message:
{text}
Policy Context:
{policyContext}
Return only valid structured output matching these instructions:
{format_instructions}
`);
const model = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
});
export async function checkCompliance(input: ComplianceInput) {
const policyContext = await retrievePolicyContext(input);
const chain = RunnableSequence.from([
async (x: ComplianceInput) => ({
...x,
policyContext,
format_instructions: parser.getFormatInstructions(),
}),
prompt,
model,
parser,
]);
return chain.invoke(input);
}
A few things matter here:
- `temperature: 0` keeps outputs stable.
- The prompt explicitly says "policy context only" to reduce hallucinated policy references.
- `StructuredOutputParser.fromZodSchema()` gives you typed validation before anything reaches production workflows.
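When you swap the mocked `retrievePolicyContext` for a real retriever, a minimal in-memory sketch could look like the following. `MemoryVectorStore` and the placeholder policy texts are stand-ins for your firm's approved vector store and documents, not a recommendation.

```ts
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Sketch: swap the in-memory store for your firm's approved vector store in production.
export async function buildPolicyRetriever() {
  const store = await MemoryVectorStore.fromTexts(
    [
      "Policy A: Do not promise guaranteed returns.",
      "Policy B: Escalate any mention of MNPI or unpublished deal terms.",
      "Policy C: Marketing materials must include required risk disclosures.",
    ],
    [{ id: "A" }, { id: "B" }, { id: "C" }],
    new OpenAIEmbeddings()
  );
  return store.asRetriever(4);
}

// Usage inside retrievePolicyContext: join the retrieved snippets into the prompt context.
export async function retrieveFromStore(query: string): Promise<string> {
  const retriever = await buildPolicyRetriever();
  const docs = await retriever.invoke(query);
  return docs.map((d) => d.pageContent).join("\n");
}
```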
4. Add deterministic escalation logic around the LLM
In banking compliance, the LLM should assist judgment, not own it. Put hard rules around sensitive phrases and route ambiguous cases to humans.
const hardBlockPatterns = [
/guaranteed returns/i,
/inside information/i,
/MNPI/i,
];
export async function decideCompliance(inputText: string) {
if (hardBlockPatterns.some((pattern) => pattern.test(inputText))) {
return {
decision: "escalate",
riskLevel: "high",
reasons: ["Hard-blocked term detected"],
citedPolicies: ["Internal restricted language policy"],
};
}
const result = await checkCompliance({
text: inputText,
desk: "IBD",
region: "UK",
clientType: "institutional",
});
// Approve, reject, and escalate all flow back to the caller, which owns routing.
return result;
}
This gives you a clear control plane:
- regex or rule-based blockers for obvious violations
- LLM reasoning for contextual judgment
- human escalation for anything borderline
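A quick usage sketch of `decideCompliance` (the sample message is made up) shows the hard-block path short-circuiting before the model is ever called:

```ts
// Assumes the decideCompliance function defined above is in scope.
const verdict = await decideCompliance(
  "We can offer guaranteed returns on this structured note."
);
// Matches /guaranteed returns/i, so the LLM is never invoked:
console.log(verdict); // { decision: "escalate", riskLevel: "high", ... }
```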
Production Considerations
- Auditability
  - Persist raw input, retrieved policies, final decision, model version, prompt version, and timestamp (see the audit record sketch after this list).
  - Regulators will care about why a message was flagged as much as whether it was flagged.
- Data residency
  - Route EU client data to EU-hosted infrastructure where required.
  - Avoid sending confidential deal content across regions unless your legal and security teams have approved that flow.
- Guardrails
  - Use allowlisted tools only; this agent should not have free-form access to trading systems or external web search.
  - Add hard filters for MNPI terms, sanctions references, bribery language, misleading performance claims, and unsuitable product language.
- Monitoring
  - Track false positives by desk and region.
  - Monitor drift when policies change after new regulations or internal guidance updates.
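Here is a minimal sketch of the audit record the auditability bullet describes. The field names and the `writeAuditRecord` helper are assumptions, and the console sink stands in for whatever append-only store your firm approves.

```ts
// Sketch: one audit record per compliance decision.
// Persist to an approved append-only store in production.
interface AuditRecord {
  inputHash: string;          // hash of the raw input text, or an encrypted payload reference
  retrievedPolicies: string[];
  modelOutput: unknown;       // raw model response before parsing
  decision: "approve" | "reject" | "escalate";
  modelVersion: string;       // e.g. "gpt-4o-mini"
  promptVersion: string;      // e.g. "compliance-prompt-v1" (assumed versioning scheme)
  decidedAt: string;          // ISO-8601 timestamp
  humanOverride?: string;     // reviewer note if a human changed the outcome
}

// Assumption: replace the console sink with your firm's audit store.
export async function writeAuditRecord(record: AuditRecord): Promise<void> {
  console.log(JSON.stringify(record));
}
```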
Common Pitfalls
- Using the LLM as the final authority
  - Bad pattern: letting the model approve sensitive content without deterministic checks.
  - Fix it by combining rule-based blocking with LLM-assisted review and mandatory escalation thresholds.
- Not grounding on current policy
  - Bad pattern: asking the model to "know compliance."
  - Fix it by retrieving approved policy docs per desk, region, and product type before every decision.
- Ignoring audit requirements
  - Bad pattern: logging only the final verdict.
  - Fix it by storing full evidence trails (a hashing sketch for the input follows this list):
    - input text hash or encrypted payload
    - retrieved policy snippets
    - model response
    - parser validation status
    - human override if applied
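For the input text hash in that evidence trail, Node's built-in crypto module is sufficient. This is a minimal sketch; whether you hash or encrypt the payload is a decision to make with your security team.

```ts
import { createHash } from "node:crypto";

// Sketch: deterministic SHA-256 hash of the raw input text for the audit trail.
export function hashInput(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}
```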
If you build this way, LangChain becomes orchestration glue rather than magic. That is what you want in investment banking: controlled behavior, traceable decisions, and enough structure that compliance can sign off on it.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.