How to Build a Compliance-Checking Agent for Fintech Using LangChain in TypeScript
A compliance checking agent reviews fintech content, transactions, or customer-facing messages against policy rules before they go live. It matters because a bad approval can mean regulatory exposure, audit failure, blocked payments, or a customer communication that violates internal controls.
Architecture
Input adapter
- Accepts the artifact to check: payment memo, KYC note, marketing copy, support reply, or transaction summary.
- Normalizes it into a structured payload.

Policy retriever
- Pulls relevant policies from a controlled source such as a vector store or document store.
- Keeps the agent grounded in current AML, KYC, sanctions, privacy, and marketing rules.

Compliance reasoning chain
- Uses an LLM to compare the input against retrieved policy snippets.
- Produces a structured verdict: pass, review, or reject.

Audit logger
- Stores the input, retrieved policy context, model output, timestamps, and reviewer decision.
- This is non-negotiable for fintech auditability.

Escalation router
- Sends borderline cases to a human compliance officer.
- Prevents the agent from making final decisions on ambiguous cases.
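The input adapter's normalized payload can be sketched as a plain TypeScript type. The names below (NormalizedArtifact, normalizeSupportReply, and the individual fields) are illustrative choices for this article, not LangChain APIs:

```typescript
// Hypothetical shape for the input adapter's output; adjust fields to your artifact types.
type ArtifactKind =
  | "payment_memo"
  | "kyc_note"
  | "marketing_copy"
  | "support_reply"
  | "transaction_summary";

interface NormalizedArtifact {
  kind: ArtifactKind;
  content: string;  // the text the agent will judge
  sourceId: string; // where the artifact came from (ticket ID, campaign ID, ...)
  receivedAt: string; // ISO timestamp, useful for the audit trail
}

// Example adapter for one channel; a real adapter would also redact PII here.
function normalizeSupportReply(raw: { ticketId: string; body: string }): NormalizedArtifact {
  return {
    kind: "support_reply",
    content: raw.body.trim(),
    sourceId: raw.ticketId,
    receivedAt: new Date().toISOString(),
  };
}

const example = normalizeSupportReply({ ticketId: "T-123", body: "  Refund approved.  " });
console.log(example.kind, example.content);
```

Keeping one adapter per channel makes it easy to test normalization separately from the LLM.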
Implementation
1) Define the compliance schema and prompt
Start by forcing structured output. In fintech, free-form answers are hard to audit and harder to route into downstream systems.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const ComplianceVerdictSchema = z.object({
  verdict: z.enum(["pass", "review", "reject"]),
  reasons: z.array(z.string()),
  policyReferences: z.array(z.string()),
});

type ComplianceVerdict = z.infer<typeof ComplianceVerdictSchema>;

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a fintech compliance reviewer.
Use only the provided policy context.
If the input is ambiguous or missing required details, return review.
Never invent policy.`,
  ],
  [
    "human",
    `Policy context:
{policyContext}
Artifact:
{artifact}
Return a concise compliance decision with reasons and policy references.`,
  ],
]);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
2) Retrieve relevant policies with LangChain
Use a vector store for policy retrieval. The exact store can vary; the pattern below uses MemoryVectorStore for clarity. In production, swap this for Pinecone, pgvector, or OpenSearch depending on your residency constraints.
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const embeddings = new OpenAIEmbeddings();

const policyDocs = [
  new Document({
    pageContent:
      "Marketing copy must not guarantee returns or imply risk-free investing.",
    metadata: { id: "MKT-001" },
  }),
  new Document({
    pageContent:
      "Customer data must not be sent to third parties without lawful basis and approval.",
    metadata: { id: "PRIV-004" },
  }),
  new Document({
    pageContent:
      "Transactions above threshold require enhanced due diligence and sanctions screening.",
    metadata: { id: "AML-012" },
  }),
];

const vectorStore = await MemoryVectorStore.fromDocuments(policyDocs, embeddings);
3) Build the chain and return a structured verdict
This is the core pattern. Retrieve policies first, then ask the model to judge only against those policies. Use RunnableSequence so each step is explicit and testable.
import { RunnableSequence } from "@langchain/core/runnables";

async function checkCompliance(artifact: string): Promise<ComplianceVerdict> {
  const relevantPolicies = await vectorStore.similaritySearch(artifact, 3);
  const policyContext = relevantPolicies
    .map((doc) => `[${doc.metadata.id}] ${doc.pageContent}`)
    .join("\n");

  const chain = RunnableSequence.from([
    // Pass-through step: a natural place to add input validation or redaction later.
    async (input: { artifact: string; policyContext: string }) => ({
      artifact: input.artifact,
      policyContext: input.policyContext,
    }),
    prompt,
    model.withStructuredOutput(ComplianceVerdictSchema),
  ]);

  return chain.invoke({ artifact, policyContext });
}
const result = await checkCompliance(
  "Please send this customer $25k transfer immediately. No need for extra checks."
);

console.log(result.verdict);
console.log(result.reasons);
console.log(result.policyReferences);
4) Add an escalation rule before final action
Do not let the model directly execute actions like releasing payments or sending messages. Put a deterministic gate in front of any side effect.
function shouldEscalate(verdict: ComplianceVerdict): boolean {
  // Conservative gate: anything other than a clean, reason-free pass goes to a human.
  return verdict.verdict !== "pass" || verdict.reasons.length > 0;
}

const decision = await checkCompliance(
  "Can we waive KYC for this high-value corporate account?"
);

if (shouldEscalate(decision)) {
  // Send to human review queue
  console.log("Route to compliance officer:", decision);
} else {
  // Safe to continue workflow
  console.log("Approved by agent");
}
Production Considerations
Audit logging
- Persist the original artifact, retrieved policy IDs, model version, prompt version, and final verdict.
- Store immutable logs in your SIEM or WORM-capable storage.
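A minimal sketch of such an audit record, assuming hypothetical names (AuditRecord, buildAuditRecord) and an in-process freeze rather than a real WORM sink:

```typescript
// Append-only audit record; persist this to your SIEM or WORM storage, not an array.
interface AuditRecord {
  artifact: string;
  policyIds: string[];
  modelVersion: string;
  promptVersion: string;
  verdict: "pass" | "review" | "reject";
  reasons: string[];
  createdAt: string; // ISO timestamp
}

function buildAuditRecord(input: {
  artifact: string;
  policyIds: string[];
  verdict: "pass" | "review" | "reject";
  reasons: string[];
}): Readonly<AuditRecord> {
  // Freeze the record so later code cannot mutate what gets persisted.
  return Object.freeze({
    ...input,
    modelVersion: "gpt-4o-mini",      // record exactly what ran
    promptVersion: "compliance-v1",   // version prompts like code
    createdAt: new Date().toISOString(),
  });
}

const record = buildAuditRecord({
  artifact: "Refund approved.",
  policyIds: ["PRIV-004"],
  verdict: "review",
  reasons: ["Possible customer-data disclosure"],
});
console.log(JSON.stringify(record));
```

The point is that every field an auditor will ask about is captured at decision time, not reconstructed later.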
Data residency
- Keep customer data in-region if your regulatory footprint requires it.
- If you process EU customer data, make sure embeddings, logs, and inference endpoints stay within approved jurisdictions.
Guardrails
- Use strict structured output with Zod.
- Block execution on review or reject; never allow the LLM to directly approve money movement or legal exceptions.
Monitoring
- Track false positives, false negatives, escalation rate, and time-to-human-review.
- Alert when policy retrieval returns low similarity scores or when the model starts over-relying on generic language.
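One way to catch weak retrieval is a threshold check over the scores that similaritySearchWithScore-style calls return. The helper below is a standalone sketch with an assumed score convention (higher means more similar); invert the comparison if your store returns distances:

```typescript
// Flag retrievals whose best match falls below a similarity floor.
// Assumes higher scores mean more similar; some stores return distances instead.
function retrievalLooksWeak(
  scored: Array<{ policyId: string; score: number }>,
  minTopScore = 0.75
): boolean {
  if (scored.length === 0) return true; // retrieving nothing is itself an alert
  const best = Math.max(...scored.map((s) => s.score));
  return best < minTopScore;
}

const results = [
  { policyId: "AML-012", score: 0.62 },
  { policyId: "MKT-001", score: 0.41 },
];

if (retrievalLooksWeak(results)) {
  console.log("Alert: low-similarity retrieval, route to human review");
}
```

Feed weak-retrieval events into the same escalation queue: if the agent could not find relevant policy, a human should decide.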
Common Pitfalls
Letting the model decide without retrieval
- If you skip policy retrieval, you get generic advice instead of grounded compliance checks.
- Fix it by always injecting specific policy snippets into the prompt.
Using unstructured text responses
- Free-form answers are hard to parse and impossible to enforce reliably in workflows.
- Fix it with withStructuredOutput() plus a Zod schema.
Treating the agent as an approver
- A compliance agent should usually recommend and escalate, not execute final business actions.
- Fix it by routing all borderline cases to humans and keeping deterministic approval rules outside the LLM.
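"Deterministic approval rules outside the LLM" can be as simple as hard business limits checked in plain code before any verdict is acted on. The threshold and function name below are illustrative assumptions, not values from any regulation:

```typescript
// Hard business rule that runs regardless of what the model says.
// The $10k threshold is illustrative; set it from policy, not from the LLM.
const EDD_THRESHOLD_USD = 10_000;

function requiresHumanApproval(
  amountUsd: number,
  agentVerdict: "pass" | "review" | "reject"
): boolean {
  // Large transactions always need a human, even on an agent "pass".
  if (amountUsd >= EDD_THRESHOLD_USD) return true;
  return agentVerdict !== "pass";
}

console.log(requiresHumanApproval(25_000, "pass")); // true: the amount rule wins
console.log(requiresHumanApproval(500, "pass"));    // false: small and clean
```

Because this rule is ordinary code, it is testable, versionable, and immune to prompt injection.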
A good fintech compliance agent is boring in the right way. It retrieves the right rules, produces a traceable verdict, escalates ambiguity fast, and leaves an audit trail that stands up under review.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.