How to Build a Compliance-Checking Agent Using LangChain in TypeScript for Banking
A compliance-checking agent reviews banking content, customer requests, or internal workflows against policy before anything is sent downstream. It matters because a bad approval path can create regulatory exposure, audit gaps, and preventable operational risk.
Architecture
- Policy source
  - A versioned set of rules for KYC, AML, sanctions, disclosures, marketing language, and data handling.
  - Keep it outside the prompt so compliance can update rules without code changes.
- Document loader
  - Pulls emails, chat transcripts, loan notes, or product copy from your internal systems.
  - In banking, this layer must respect data residency and access controls.
- LangChain LLM chain
  - Uses ChatOpenAI plus a structured output schema to classify issues consistently.
  - The model should return findings in a machine-readable format for audit trails.
- Decision engine
  - Converts model output into approve, reject, or needs_review.
  - This should be deterministic and conservative.
- Audit logger
  - Persists input hash, policy version, model response, timestamps, and reviewer action.
  - You need this for explainability and regulator review.
- Human escalation path
  - Routes ambiguous cases to compliance staff.
  - Never let the agent auto-approve high-risk items without a fallback.
Implementation
1) Define the compliance schema and policy input
Start with a strict output contract. For banking use cases, you want structured findings rather than free-form prose.
import { z } from "zod";

export const ComplianceFindingSchema = z.object({
  decision: z.enum(["approve", "reject", "needs_review"]),
  riskLevel: z.enum(["low", "medium", "high"]),
  reasons: z.array(z.string()).min(1),
  policyRefs: z.array(z.string()).default([]),
});

export type ComplianceFinding = z.infer<typeof ComplianceFindingSchema>;

export const BankingPolicy = `
You are reviewing banking content for compliance.
Check for:
- misleading claims about rates or returns
- missing disclosures
- prohibited advice
- sanctions/AML red flags
- collection or storage of sensitive personal data
Return only structured findings.
`;
This schema gives you predictable outputs and makes downstream routing simple. In practice, your policy text should come from a versioned store like S3, Git-backed config, or an internal policy service.
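A minimal sketch of that idea, assuming policies live as versioned text files on disk; the file layout and the loadPolicy helper here are illustrative, not part of LangChain:

import fs from "fs/promises";
import path from "path";

// Illustrative layout: policies/banking/2026-04-01.txt, maintained by
// compliance and versioned outside the application code.
export async function loadPolicy(domain: string, version: string) {
  const policyPath = path.join("policies", domain, `${version}.txt`);
  const text = await fs.readFile(policyPath, "utf8");
  // Return the version alongside the text so it can be written to the audit log.
  return { version, text };
}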
2) Build the LangChain checker with structured output
Use ChatOpenAI with withStructuredOutput() so the model returns JSON validated against your Zod schema. That is much safer than parsing raw text in production.
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { BankingPolicy, ComplianceFindingSchema } from "./schema";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0, // favor repeatable decisions over creative output
});

const prompt = PromptTemplate.fromTemplate(`
{policy}

Review the following banking content:

Content:
{content}

Context:
{context}
`);

// Bind the Zod schema so responses come back as validated objects.
const structuredModel = llm.withStructuredOutput(ComplianceFindingSchema);

export async function checkCompliance(content: string, context: string) {
  const renderedPrompt = await prompt.format({
    policy: BankingPolicy,
    content,
    context,
  });
  const result = await structuredModel.invoke(renderedPrompt);
  return result;
}
This pattern keeps your output constrained and auditable. Note that temperature: 0 matters here: you want decisions for identical inputs to be as repeatable as possible, even though sampling settings alone do not guarantee full determinism.
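A quick usage sketch; the sample content is invented, and the exact findings will vary by model:

import { checkCompliance } from "./checker";

// Requires OPENAI_API_KEY in the environment.
async function demo() {
  const finding = await checkCompliance(
    "Earn a guaranteed 12% return with our new savings account!",
    "Retail banking marketing copy"
  );
  // Expect something like decision "reject" or "needs_review" with a reason
  // about misleading return claims; the exact wording is model-dependent.
  console.log(finding);
}

demo();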
3) Add deterministic routing and audit logging
The LLM should not be your final control point. Use it as an analysis layer, then apply business rules before any action is taken.
import crypto from "crypto";
import fs from "fs/promises";
import { checkCompliance } from "./checker";

// Hash inputs so the audit trail can reference content without storing it raw.
function hashInput(input: string) {
  return crypto.createHash("sha256").update(input).digest("hex");
}

export async function runComplianceCheck(content: string) {
  const context = "Retail banking customer communication";
  const finding = await checkCompliance(content, context);

  // Conservative routing: a model rejection or any high-risk finding is
  // escalated to a human rather than finalized automatically.
  const finalDecision =
    finding.decision === "reject" || finding.riskLevel === "high"
      ? "needs_review"
      : finding.decision;

  const auditRecord = {
    timestamp: new Date().toISOString(),
    inputHash: hashInput(content),
    context,
    modelDecision: finding.decision,
    finalDecision,
    reasons: finding.reasons,
    policyRefs: finding.policyRefs,
    riskLevel: finding.riskLevel,
    policyVersion: "2026-04-01", // in production, read this from your policy store
  };

  await fs.appendFile("compliance-audit.log", JSON.stringify(auditRecord) + "\n");
  return auditRecord;
}
This is where banking controls matter. Even if the model says approve, your wrapper can still force needs_review for high-risk outputs like lending promises or customer-facing financial advice.
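To make that routing concrete, here is one way a caller might act on the final decision; publishContent and enqueueForReview are hypothetical stand-ins for your own publishing and case-management systems:

import { runComplianceCheck } from "./runner";

// Hypothetical stand-ins; replace with your real integrations.
async function publishContent(content: string): Promise<void> {
  /* send to your CMS or messaging pipeline */
}
async function enqueueForReview(content: string, record: unknown): Promise<void> {
  /* push to your compliance case queue */
}

export async function handleSubmission(content: string) {
  const record = await runComplianceCheck(content);

  switch (record.finalDecision) {
    case "approve":
      await publishContent(content);
      break;
    case "needs_review":
      await enqueueForReview(content, record);
      break;
    case "reject":
      // Nothing goes out; the audit record already explains why.
      break;
  }
  return record;
}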
4) Wire it into an API endpoint or workflow
The agent usually sits between a user action and the system that publishes or stores the content. Keep the interface small and explicit.
import express from "express";
import { runComplianceCheck } from "./runner";

const app = express();
app.use(express.json());

app.post("/compliance/check", async (req, res) => {
  try {
    const { content } = req.body;
    if (!content || typeof content !== "string") {
      return res.status(400).json({ error: "content is required" });
    }
    const result = await runComplianceCheck(content);
    res.json(result);
  } catch (error) {
    res.status(500).json({ error: "compliance_check_failed" });
  }
});

app.listen(3000);
In a real bank, this endpoint would sit behind authN/authZ controls and probably inside a private network segment. If you process regulated data, make sure your model provider setup matches your residency requirements.
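As one illustration of the authN/authZ point, a minimal API-key guard in Express; the header name and environment variable are placeholders, and a real bank deployment would verify tokens against its identity provider instead:

import express from "express";

const app = express();

// Placeholder guard: register before any routes so every request is checked.
app.use((req, res, next) => {
  if (req.header("x-api-key") !== process.env.COMPLIANCE_API_KEY) {
    return res.status(401).json({ error: "unauthorized" });
  }
  next();
});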
Production Considerations
- Deployment
  - Run the agent in a private VPC with outbound restrictions.
  - If data residency is required, pin processing to approved regions and avoid sending sensitive fields unnecessarily.
- Monitoring
  - Track false positives, false negatives, escalation rate, and reviewer override rate.
  - Log prompt version, policy version, model name, latency, and token usage for every decision.
- Guardrails
  - Use strict schemas with withStructuredOutput() or equivalent validation.
  - Add rule-based blocks for known hard failures like sanctions keywords, missing disclaimers, or prohibited product claims (see the sketch after this list).
- Auditability
  - Store immutable records of input hashes and decision outputs.
  - Make sure reviewers can reconstruct why a case was escalated without exposing unnecessary PII.
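As a sketch of the rule-based guardrail above, a deterministic pre-check that runs before the model and short-circuits known hard failures; the phrase list is illustrative and would come from compliance in practice:

// Illustrative hard-block patterns; maintain and version these with the
// policy text, not in application code.
const HARD_BLOCK_PATTERNS: RegExp[] = [
  /guaranteed returns?/i,
  /risk[- ]free/i,
  /no credit check/i,
];

export function preCheck(content: string): { blocked: boolean; matches: string[] } {
  const matches = HARD_BLOCK_PATTERNS.filter((p) => p.test(content)).map(
    (p) => p.source
  );
  return { blocked: matches.length > 0, matches };
}

// If preCheck(content).blocked is true, skip the LLM call entirely and route
// the item straight to reject or needs_review.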
Common Pitfalls
- Treating the LLM as the final authority
  - Avoid this by wrapping every model decision in deterministic business rules.
  - High-risk cases should default to human review.
- Letting policies live only in prompts
  - Prompts drift. Policies need versioning outside application code.
  - Store them in config or a dedicated policy service so compliance can update them independently.
- Ignoring PII minimization
  - Don’t send full customer records when only a transaction summary is needed.
  - Redact names, account numbers, addresses, and other sensitive fields before calling the model unless they are required for review (a minimal sketch follows this list).
- Skipping audit metadata
  - A plain “approved” response is not enough for banking.
  - Log policy version, model version (if your stack exposes it), request hash, and reviewer outcome so you can defend decisions later.
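A minimal redaction sketch for the PII point above; these patterns are deliberately simplistic, and production redaction should use vetted tooling tested against your real data formats:

// Simplistic example patterns only; do not rely on these in production.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{8,17}\b/g, "[ACCOUNT_NUMBER]"], // bare account-like digit runs
  [/\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/g, "[IBAN]"], // IBAN-like strings
  [/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[EMAIL]"],
];

export function redact(content: string): string {
  return REDACTIONS.reduce(
    (text, [pattern, label]) => text.replace(pattern, label),
    content
  );
}

// Apply before calling the model, e.g. checkCompliance(redact(content), context).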
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.