How to Build a Compliance-Checking Agent Using LangChain in TypeScript for Retail Banking
A compliance checking agent for retail banking reviews customer-facing text, internal policy drafts, or case notes against regulatory rules before anything goes live. It matters because small wording mistakes can trigger conduct risk, mis-selling claims, or audit findings, especially when teams are moving fast across marketing, product, and operations.
Architecture
- Input normalization layer
  - Cleans and structures raw text from chat transcripts, email drafts, product copy, or CRM notes.
  - Extracts metadata such as jurisdiction, product type, customer segment, and channel.
- Policy retrieval layer
  - Pulls the relevant compliance controls from a curated knowledge base.
  - Uses vector search over approved policy docs, regulatory guidance, and internal procedures.
- LLM reasoning layer
  - Evaluates the content against the retrieved rules.
  - Produces a structured decision: pass, fail, or needs human review.
- Audit logging layer
  - Stores input, retrieved policy references, model output, timestamps, and reviewer actions.
  - Needed for traceability during compliance reviews and model governance.
- Guardrail layer
  - Blocks unsupported claims, PII leakage, and out-of-scope decisions.
  - Enforces deterministic checks before the LLM sees the text.
- Human escalation path
  - Routes ambiguous cases to compliance analysts.
  - Prevents the agent from making final decisions on high-risk content.
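The input normalization layer can be sketched with plain types. Names like `NormalizedInput` and `normalizeInput` are illustrative choices for this article, not LangChain APIs:

```typescript
// Sketch of the normalization layer: collapse whitespace and attach
// the metadata the retrieval layer will filter on.
interface NormalizedInput {
  text: string;
  jurisdiction: string; // e.g. "UK"
  productType: string;  // e.g. "credit-card"
  channel: string;      // e.g. "email"
}

function normalizeInput(
  raw: string,
  meta: Omit<NormalizedInput, "text">
): NormalizedInput {
  return { text: raw.replace(/\s+/g, " ").trim(), ...meta };
}

const input = normalizeInput("  Apply now  for a card ", {
  jurisdiction: "UK",
  productType: "credit-card",
  channel: "email",
});
// input.text === "Apply now for a card"
```

In practice this layer is also where you would attach a request ID so the audit logging layer can correlate every decision back to its source.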
Implementation
1) Install dependencies and define the data shape
Use LangChain with a chat model and a vector store for policy retrieval. In TypeScript, keep the output schema strict so the agent returns machine-readable compliance decisions.
```bash
npm install langchain @langchain/core @langchain/openai @langchain/community zod
```

```ts
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

// Strict output schema so downstream systems get machine-readable decisions.
const ComplianceDecisionSchema = z.object({
  status: z.enum(["pass", "fail", "review"]),
  reasons: z.array(z.string()),
  policyReferences: z.array(z.string()),
});

type ComplianceDecision = z.infer<typeof ComplianceDecisionSchema>;

// Temperature 0 keeps classification behavior as deterministic as possible.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
```
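Where you want a dependency-free counterpart to the zod schema, e.g. in unit tests or a lightweight edge service, a hand-rolled type guard mirroring the same shape might look like this. The `isComplianceDecision` helper is our own, not part of zod or LangChain:

```typescript
// Dependency-free guard mirroring ComplianceDecisionSchema's contract.
type Status = "pass" | "fail" | "review";

interface ComplianceDecision {
  status: Status;
  reasons: string[];
  policyReferences: string[];
}

function isComplianceDecision(value: unknown): value is ComplianceDecision {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    ["pass", "fail", "review"].includes(v.status as string) &&
    Array.isArray(v.reasons) &&
    v.reasons.every((r) => typeof r === "string") &&
    Array.isArray(v.policyReferences) &&
    v.policyReferences.every((p) => typeof p === "string")
  );
}
```

If the guard and the zod schema ever disagree, treat the zod schema as the source of truth, since it is what gates the model output.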
2) Load banking policies into a retriever
For production you would chunk approved policies by section and jurisdiction. Here the important part is using a `VectorStoreRetriever` through LangChain's retrieval APIs so the model answers from bank-approved material instead of free-form memory.
```ts
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";

const policyDocs = [
  new Document({
    pageContent:
      "UK retail banking: do not imply guaranteed approval for credit products. All eligibility statements must be qualified.",
    metadata: { id: "UK-CREDIT-001", jurisdiction: "UK" },
  }),
  new Document({
    pageContent:
      "Never request full card PAN or CVV in customer support messages. Redirect to secure channels.",
    metadata: { id: "PCI-001", jurisdiction: "GLOBAL" },
  }),
];

const vectorStore = await MemoryVectorStore.fromDocuments(
  policyDocs,
  new OpenAIEmbeddings()
);

// Return the top 4 policy snippets for each piece of text under review.
const retriever = vectorStore.asRetriever(4);
```
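For the production chunking mentioned above, here is a minimal sketch. It assumes your policy documents mark sections with a `Section:` line, which is a made-up convention for illustration; adapt the split rule to your actual document format:

```typescript
// Split one policy document into per-section chunks, each carrying the
// document ID and jurisdiction so retrieval can filter and cite them.
interface PolicyChunk {
  id: string;
  jurisdiction: string;
  section: number;
  text: string;
}

function chunkPolicy(
  id: string,
  jurisdiction: string,
  body: string
): PolicyChunk[] {
  return body
    .split(/\n(?=Section:)/) // new chunk at each "Section:" line
    .map((text, i) => ({
      id: `${id}#${i + 1}`,
      jurisdiction,
      section: i + 1,
      text: text.trim(),
    }))
    .filter((c) => c.text.length > 0);
}

const chunks = chunkPolicy(
  "UK-CREDIT-001",
  "UK",
  "Section: Eligibility\nQualify all statements.\nSection: Advertising\nNo guaranteed approval."
);
// chunks.length === 2; chunks[0].id === "UK-CREDIT-001#1"
```

Each `PolicyChunk` then maps directly onto a `Document` with `pageContent` and `metadata`, matching the shape indexed above.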
3) Build the compliance chain
This pattern uses retrieval plus structured output. The prompt tells the model to classify based only on retrieved policy snippets and to escalate when evidence is incomplete.
```ts
import { RunnablePassthrough } from "@langchain/core/runnables";

const prompt = PromptTemplate.fromTemplate(`
You are a retail banking compliance checker.
Use only the policy context below.
If the text is ambiguous or missing jurisdictional detail, return review.

Policy context:
{context}

Customer or staff text:
{text}

Return JSON with:
status: pass | fail | review
reasons: string[]
policyReferences: string[]
`);

// Prefix each snippet with its policy ID so the model can cite it.
const formatDocs = (docs: Array<{ pageContent: string; metadata?: any }>) =>
  docs.map((d) => `[${d.metadata?.id ?? "UNKNOWN"}] ${d.pageContent}`).join("\n");

const chain = RunnablePassthrough.assign({
  context: async (input: { text: string }) => {
    const docs = await retriever.getRelevantDocuments(input.text);
    return formatDocs(docs);
  },
})
  .pipe(prompt)
  .pipe(llm)
  .pipe(new StringOutputParser());

export async function checkCompliance(text: string): Promise<ComplianceDecision> {
  const raw = await chain.invoke({ text });
  // Throws if the model returns malformed or non-conforming JSON.
  return ComplianceDecisionSchema.parse(JSON.parse(raw));
}
```
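One practical hardening step: chat models sometimes wrap JSON in markdown code fences even when told not to, which makes a bare `JSON.parse` throw. A small helper can strip fences first; `extractJson` is our own name for it, not a LangChain utility:

```typescript
// Strip an optional markdown code fence (``` or ```json) before parsing,
// so fenced and bare JSON responses are both accepted.
function extractJson(raw: string): unknown {
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  const body = fenced ? fenced[1] : raw;
  return JSON.parse(body.trim());
}

const parsed = extractJson(
  '```json\n{"status":"pass","reasons":[],"policyReferences":[]}\n```'
);
// (parsed as { status: string }).status === "pass"
```

You would call `extractJson(raw)` in place of `JSON.parse(raw)` before handing the value to the zod schema, which still does the final validation.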
4) Add pre-checks and escalation logic
Before calling the LLM, run deterministic filters for obvious violations like PAN/CVV leakage or disallowed claims. In retail banking this reduces cost and prevents sensitive data from entering the prompt.
```ts
function preCheck(text: string): { blocked: boolean; reason?: string } {
  // Digit runs of 13-19 characters (allowing spaces/dashes) that could be a card PAN.
  const cardPattern = /\b(?:\d[ -]*?){13,19}\b/;
  const cvvPattern = /\bCVV\b|\bCVC\b|\bsecurity code\b/i;
  if (cardPattern.test(text) || cvvPattern.test(text)) {
    return {
      blocked: true,
      reason: "Potential payment card data detected. Route to secure handling flow.",
    };
  }
  return { blocked: false };
}

export async function evaluateText(text: string) {
  const pre = preCheck(text);
  if (pre.blocked) {
    // Deterministic block: never send suspected card data to the model.
    return {
      status: "fail",
      reasons: [pre.reason!],
      policyReferences: ["PCI-001"],
    } satisfies ComplianceDecision;
  }
  return checkCompliance(text);
}
```
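The card pattern above matches any 13-19 digit run, so order numbers and case references will also trip it. A Luhn checksum, the standard card-number check-digit algorithm, cuts those false positives; `passesLuhn` is an illustrative helper name:

```typescript
// Luhn checksum: true only for digit sequences that could be real PANs.
// Run it on regex matches to filter out order IDs and reference numbers.
function passesLuhn(candidate: string): boolean {
  const digits = candidate.replace(/\D/g, "");
  if (digits.length < 13 || digits.length > 19) return false;
  let sum = 0;
  let double = false;
  // Walk right to left, doubling every second digit.
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = Number(digits[i]);
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

// "4111111111111111" (a well-known test PAN) passes;
// "1234567890123456" fails the checksum.
```

Keep the regex as the first pass and Luhn as the second: blocking on the regex alone is safer but noisier, so the right trade-off depends on your escalation capacity.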
Production Considerations
- Data residency
  - Keep embeddings, logs, and prompts in-region for your operating jurisdiction.
  - If you process UK customer content, don’t route it through a non-approved region just because the model endpoint is cheaper.
- Auditability
  - Store the input text hash, retrieved document IDs, final decision, model version, and timestamp.
  - Regulators care about why a decision was made, not just the result.
- Monitoring
  - Track pass/fail/review rates by channel and product line.
  - Sudden spikes in `review` often mean your policies are stale or your prompt is too vague.
- Guardrails
  - Never allow the agent to approve high-risk communications without human sign-off.
  - Add hard rules for prohibited phrases like “guaranteed approval” or “instant loan approval” unless your legal team has explicitly approved them.
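The prohibited-phrase guardrail can be sketched as a deterministic check that runs before the LLM ever sees the text. The phrase list here is illustrative only; in production it should come from your legal team and be versioned like any other policy source:

```typescript
// Deterministic phrase guardrail: cheap, auditable, and independent of
// the model. The list below is a placeholder, not legal guidance.
const PROHIBITED_PHRASES = [
  "guaranteed approval",
  "instant loan approval",
  "risk-free",
];

function findProhibitedPhrases(text: string): string[] {
  const lower = text.toLowerCase();
  return PROHIBITED_PHRASES.filter((p) => lower.includes(p));
}

const hits = findProhibitedPhrases("Guaranteed approval on all loans!");
// hits → ["guaranteed approval"]
```

Any non-empty result should fail the content outright, with the matched phrases recorded in the audit log as the reasons.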
Common Pitfalls
- Using generic retrieval over unapproved documents
  - If your vector store contains old PDFs or draft policies, the agent will cite them confidently.
  - Fix it by indexing only versioned compliance sources with document IDs and effective dates.
- Letting the model see raw sensitive data
  - Customer account numbers, card details, and complaint notes can end up in prompts or traces.
  - Fix it with redaction before invocation and strict logging controls after invocation.
- Treating low confidence as pass
  - In banking workflows, ambiguity should usually escalate to review.
  - Fix it by making `review` a first-class outcome and wiring it into an analyst queue instead of auto-release.
- Skipping jurisdiction checks
  - A rule valid in one market may be wrong in another.
  - Fix it by passing `jurisdiction` into retrieval filters so UK retail banking content does not use US-only guidance.
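The jurisdiction filter from the last pitfall can be sketched as a plain predicate over document metadata. The shape mirrors the `Document` metadata indexed earlier; most LangChain vector stores accept a comparable metadata filter at query time, so the same logic can run inside the retriever rather than after it:

```typescript
// Keep only documents for the caller's jurisdiction, plus GLOBAL rules
// (like PCI) that apply everywhere.
interface PolicyDoc {
  pageContent: string;
  metadata: { id: string; jurisdiction: string };
}

function filterByJurisdiction(
  docs: PolicyDoc[],
  jurisdiction: string
): PolicyDoc[] {
  return docs.filter(
    (d) =>
      d.metadata.jurisdiction === jurisdiction ||
      d.metadata.jurisdiction === "GLOBAL"
  );
}

const docs: PolicyDoc[] = [
  { pageContent: "UK credit rule", metadata: { id: "UK-CREDIT-001", jurisdiction: "UK" } },
  { pageContent: "US disclosure rule", metadata: { id: "US-TILA-001", jurisdiction: "US" } },
  { pageContent: "PCI rule", metadata: { id: "PCI-001", jurisdiction: "GLOBAL" } },
];
// filterByJurisdiction(docs, "UK") keeps UK-CREDIT-001 and PCI-001.
```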
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.