How to Build a Compliance-Checking Agent Using LangChain in TypeScript for Lending

By Cyprian Aarons · Updated 2026-04-21
Tags: compliance-checking, langchain, typescript, lending

A compliance-checking agent for lending reviews loan applications, supporting documents, and policy rules before a human underwriter or automated decisioning system approves the case. This matters because lending decisions are regulated, auditable, and high-risk: if your agent misses a disclosure issue, a fair-lending concern, or a jurisdiction-specific rule, you create legal exposure and bad credit decisions.

Architecture

  • Document ingestion layer

    • Pulls borrower data from application forms, income docs, bank statements, and policy PDFs.
    • Normalizes text before it hits the model.
  • Policy retrieval layer

    • Uses embeddings + vector search to fetch the relevant lending policy sections.
    • Keeps the agent grounded in current internal policy and jurisdiction rules.
  • Compliance reasoning chain

    • Compares application facts against retrieved policy.
    • Produces a structured verdict: pass, fail, or needs human review.
  • Audit log store

    • Persists inputs, retrieved policy snippets, model output, and final decision.
    • Required for traceability in lending workflows.
  • Guardrail layer

    • Blocks unsupported claims, missing evidence, and risky outputs.
    • Forces structured JSON so downstream systems can consume results safely.
  • Human review handoff

    • Escalates borderline cases to compliance officers or underwriters.
    • Prevents the agent from making final decisions on ambiguous cases.

Implementation

1) Install dependencies and set up the model

Use LangChain’s TypeScript packages with a chat model that supports structured output. For production lending workflows, keep the model behind your own service boundary so you can control logging and residency.

npm install langchain @langchain/core @langchain/openai zod

import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const ComplianceResultSchema = z.object({
  decision: z.enum(["pass", "fail", "review"]),
  reasons: z.array(z.string()),
  policyReferences: z.array(z.string()),
  missingEvidence: z.array(z.string()),
});

type ComplianceResult = z.infer<typeof ComplianceResultSchema>;

// Temperature 0 keeps compliance verdicts as repeatable as possible.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

2) Load lending policy into a retriever

This example uses a vector store retriever so the agent can fetch the exact policy clauses relevant to the loan type, geography, and product. In lending, this is better than stuffing every rule into the prompt.

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const docs = [
  new Document({
    pageContent:
      "For unsecured personal loans above $50k, verify two recent pay stubs and one bank statement.",
    metadata: { source: "policy-manual", section: "4.2" },
  }),
  new Document({
    pageContent:
      "Applicants in California require adverse action notices when declining due to insufficient income verification.",
    metadata: { source: "policy-manual", section: "7.1", jurisdiction: "CA" },
  }),
];

const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

const retriever = vectorStore.asRetriever(3);
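
Policy clauses are often jurisdiction-specific, so you will usually want retrieval narrowed by metadata as well as similarity. MemoryVectorStore accepts a predicate function as its filter; this sketch assumes the jurisdiction metadata field used in the documents above:

// Only fetch clauses that are global or tagged for the applicant's
// state. Assumes the `jurisdiction` metadata field from the docs above.
const caRetriever = vectorStore.asRetriever({
  k: 3,
  filter: (doc) =>
    doc.metadata.jurisdiction === undefined ||
    doc.metadata.jurisdiction === "CA",
});

const caClauses = await caRetriever.invoke(
  "declining an unsecured personal loan in California"
);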

3) Build the compliance chain with structured output

The key pattern is: retrieve relevant policy text first, then ask the model to evaluate the application against those rules, then force a schema. This keeps outputs deterministic enough for downstream automation.

import { RunnableSequence } from "@langchain/core/runnables";
import { PromptTemplate } from "@langchain/core/prompts";

// Literal braces in the JSON shape are doubled ({{ }}) so
// PromptTemplate does not parse them as input variables.
const prompt = PromptTemplate.fromTemplate(`
You are a lending compliance checker.

Application:
{application}

Relevant policy:
{policy}

Return only valid JSON with this shape:
{{
  "decision": "pass" | "fail" | "review",
  "reasons": string[],
  "policyReferences": string[],
  "missingEvidence": string[]
}}
`);

const complianceChain = RunnableSequence.from([
  async (input: { application: string }) => {
    const retrieved = await retriever.invoke(input.application);
    return {
      application: input.application,
      policy: retrieved.map((d) => `${d.metadata.section}: ${d.pageContent}`).join("\n"),
    };
  },
  prompt,
  llm.withStructuredOutput(ComplianceResultSchema),
]);

const result = await complianceChain.invoke({
  application:
    "Borrower requests $75k unsecured personal loan in CA. Provided one pay stub and no bank statement.",
});

console.log(result.decision);
console.log(result.reasons);
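
Because withStructuredOutput() binds the Zod schema to the model call, the chain returns a parsed, typed ComplianceResult rather than raw text; if the model produces output that fails schema validation, the call fails loudly instead of passing malformed data downstream.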

4) Add audit logging and human escalation

In lending, every decision needs an audit trail. Store what was checked, what policy was used, and why the agent escalated.

async function runComplianceCheck(applicationText: string) {
  const result = await complianceChain.invoke({ application: applicationText });

  const auditRecord = {
    timestamp: new Date().toISOString(),
    applicationText,
    result,
    reviewer: result.decision === "review" ? "human-compliance" : "agent",
    systemVersion: "compliance-agent-v1",
  };

  // Replace with your database / SIEM / immutable log sink
  console.log("AUDIT:", JSON.stringify(auditRecord));

  if (result.decision === "review") {
    return {
      status: "escalate",
      reason: result.reasons.join("; "),
      missingEvidence: result.missingEvidence,
    };
  }

  return {
    status: result.decision,
    reasons: result.reasons,
    references: result.policyReferences,
  };
}
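
Calling the wrapper looks like this; the audit record is emitted as a side effect before the caller sees anything:

const outcome = await runComplianceCheck(
  "Borrower requests $75k unsecured personal loan in CA. " +
    "Provided one pay stub and no bank statement."
);

console.log(JSON.stringify(outcome, null, 2));
// Likely an escalation or failure here, since policy 4.2 requires
// two pay stubs and a bank statement above $50k.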

Production Considerations

  • Keep data residency explicit

    • Borrower PII may need to stay in-region depending on your regulator and internal policy.
    • Use private deployment options or regional endpoints for embeddings and LLM calls.
  • Log every retrieval and decision

    • Store prompt inputs, retrieved clauses, model version, timestamps, and final verdict.
    • This is non-negotiable when auditors ask why a loan was flagged or approved.
  • Use hard guardrails for final decisions

    • The agent should recommend pass, fail, or review, not auto-disburse funds.
    • Route all adverse decisions through business rules plus human review where required by law or policy (see the gate sketch after this list).
  • Monitor drift in policies and outcomes

    • If underwriting rules change but your vector index is stale, the agent will make wrong calls.
    • Re-index policies on every approved update and track false positives/negatives by product line.
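
A concrete way to enforce the hard-guardrail bullet above is a thin gate between the agent's verdict and any downstream action. This is a sketch under the assumption that runBusinessRules and queueForHumanReview stand in for your own systems; they are placeholders, not library calls:

// Placeholders for your decisioning engine and case-management queue.
declare function runBusinessRules(
  application: string,
  result: ComplianceResult
): Promise<void>;
declare function queueForHumanReview(
  application: string,
  result: ComplianceResult
): Promise<void>;

async function decide(applicationText: string) {
  const result = await complianceChain.invoke({ application: applicationText });

  // The agent never disburses funds. "pass" only unlocks the next
  // business-rule stage; "fail" and "review" always reach a human
  // where law or policy requires it.
  if (result.decision === "pass") {
    await runBusinessRules(applicationText, result);
  } else {
    await queueForHumanReview(applicationText, result);
  }
}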

Common Pitfalls

  • Stuffing raw borrower PII into prompts without minimization

    • Only send fields needed for the specific compliance check.
    • Mask account numbers, SSNs, and other sensitive identifiers before model invocation (a masking sketch follows this list).
  • Treating retrieval as optional

    • A compliance agent without policy grounding will hallucinate rules.
    • Always retrieve current lending policies before asking for a verdict.
  • Letting the model output free-form text

    • Free-form responses are hard to validate and impossible to automate safely.
    • Use withStructuredOutput() with a Zod schema so downstream systems get predictable results.
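
For the PII pitfall above, a minimal masking pass before any model or embedding call might look like the following. The regexes are illustrative, not an exhaustive scrubber; production systems should use a dedicated PII-detection service:

// Illustrative masking pass; the patterns are examples, not an
// exhaustive PII scrubber.
function maskSensitiveFields(text: string): string {
  return text
    // US SSNs formatted like 123-45-6789
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")
    // Long digit runs that look like account or card numbers
    .replace(/\b\d{10,16}\b/g, "[ACCOUNT]");
}

const safeInput = maskSensitiveFields(
  "Borrower SSN 123-45-6789, checking account 00123456789."
);
// -> "Borrower SSN [SSN], checking account [ACCOUNT]."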

If you build this pattern correctly, you get a compliance assistant that is useful in underwriting operations without pretending to be the final authority. That’s the right shape for lending: grounded in policy, auditable by design, and easy to escalate when the case is not clean.

