How to Build a Compliance-Checking Agent Using CrewAI in TypeScript for Lending
A compliance checking agent for lending reviews loan applications, supporting documents, and decision outputs against policy and regulatory rules before anything is approved or sent downstream. In lending, that matters because a single missed disclosure, an unfair adverse-action reason, or a data-residency violation can turn into regulatory exposure fast.
Architecture
- Input adapter
  - Normalizes application payloads from LOS, CRM, or broker portals.
  - Strips non-required PII before the agent sees it.
- Policy retrieval layer
  - Pulls the latest lending policy, product eligibility rules, and jurisdiction-specific requirements.
  - Keeps policy text versioned for audit.
- Compliance analyst agent
  - Checks the application against rules like income verification, DTI thresholds, KYC/AML flags, adverse-action reason requirements, and document completeness.
- Audit logger
  - Stores every check, rule hit, model output, and source document reference.
  - Produces evidence for internal audit and regulators.
- Decision gate
  - Converts the agent output into allow / review / reject / escalate.
  - Never lets the LLM make the final credit decision without deterministic rules.
- Residency and redaction guardrail
  - Ensures sensitive data stays in-region and only approved fields are passed to the model.
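The decision gate described above can be sketched as a deterministic mapping from findings to an outcome. This is a minimal sketch, assuming findings carry a `status` and `severity` field (matching the schema defined later); the escalation rules themselves are illustrative, not a recommended policy:

```typescript
// Deterministic decision gate: the agent's findings go in, hard-coded
// rules decide what happens next. The thresholds here are illustrative.
type Finding = { status: "pass" | "review" | "fail"; severity: "low" | "medium" | "high" };
type Gate = "allow" | "review" | "reject" | "escalate";

export function decisionGate(findings: Finding[]): Gate {
  // Any high-severity failure goes straight to a human.
  if (findings.some((f) => f.status === "fail" && f.severity === "high")) return "escalate";
  // Any other failure is rejected by rule, not by the model.
  if (findings.some((f) => f.status === "fail")) return "reject";
  // Anything the agent was unsure about lands in the review queue.
  if (findings.some((f) => f.status === "review")) return "review";
  return "allow";
}
```

Because the mapping is pure and deterministic, the same findings always produce the same gate outcome, which is exactly what auditors want to see.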
Implementation
1) Install CrewAI for TypeScript and define your compliance schema
You want a strict output contract. For lending, free-form text is useless unless you can map it to a control or audit record.
```shell
npm install @crewai/crewai zod
```

```typescript
import { z } from "zod";

export const ComplianceFindingSchema = z.object({
  status: z.enum(["pass", "review", "fail"]),
  ruleId: z.string(),
  severity: z.enum(["low", "medium", "high"]),
  evidence: z.array(z.string()),
  rationale: z.string(),
});

export const ComplianceReportSchema = z.object({
  applicationId: z.string(),
  overallStatus: z.enum(["pass", "review", "fail"]),
  findings: z.array(ComplianceFindingSchema),
});
```
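Once findings conform to this contract, mapping each one to a flat audit record is mechanical. A minimal dependency-free sketch; the record fields beyond the schema (`recordedAt`, `policyVersion`, `outcome`) are assumptions about what an audit table might hold:

```typescript
// Flatten a structured finding into one audit record row. The finding
// shape mirrors ComplianceFindingSchema; extra fields are assumptions.
type ComplianceFinding = {
  status: "pass" | "review" | "fail";
  ruleId: string;
  severity: "low" | "medium" | "high";
  evidence: string[];
  rationale: string;
};

export function toAuditRecord(applicationId: string, policyVersion: string, f: ComplianceFinding) {
  return {
    applicationId,
    policyVersion,
    ruleId: f.ruleId,
    outcome: f.status,
    severity: f.severity,
    // Evidence is joined so the record stays one row per rule hit.
    evidence: f.evidence.join("; "),
    rationale: f.rationale,
    recordedAt: new Date().toISOString(),
  };
}
```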
2) Build the compliance agent with a narrow role
Use one agent for analysis and keep the instructions explicit. For lending, that means the agent checks policy adherence, not creditworthiness.
```typescript
import { Agent } from "@crewai/crewai";

export const complianceAgent = new Agent({
  name: "Lending Compliance Checker",
  role: "Loan compliance analyst",
  goal:
    "Review lending applications for policy violations, missing disclosures, residency constraints, and adverse-action readiness.",
  backstory:
    "You are a senior lending compliance analyst. You do not make credit decisions. You only identify policy gaps and regulatory risks.",
  verbose: true,
});
```
3) Create tasks that force evidence-based review
The task should tell the model exactly what to inspect. In lending, I usually split this into three checks: eligibility, disclosures, and auditability.
```typescript
import { Task } from "@crewai/crewai";
import { ComplianceReportSchema } from "./schemas";
import { complianceAgent } from "./agent";

export const reviewLoanApplicationTask = new Task({
  description: `
    Review this loan application for compliance issues.
    Check:
    - Required disclosures are present
    - Income documentation is sufficient
    - DTI/LTV policy thresholds are respected
    - KYC/AML flags are noted
    - Adverse action reasons are supportable
    - Data residency constraints are not violated
    Return only structured findings with rule IDs and evidence.
  `,
  expectedOutput:
    "A structured compliance report with overall status and specific findings.",
  agent: complianceAgent,
});
```
4) Run the crew and post-process into an auditable result
This is where you wire in your application payload. Keep raw PII out of the prompt if you can; pass references or redacted fields instead.
```typescript
import { Crew } from "@crewai/crewai";
import { reviewLoanApplicationTask } from "./task";
import { ComplianceReportSchema } from "./schemas";

const crew = new Crew({
  agents: [reviewLoanApplicationTask.agent],
  tasks: [reviewLoanApplicationTask],
});

async function main() {
  const application = {
    applicationId: "LN-10492",
    productType: "personal_loan",
    jurisdiction: "US-NY",
    incomeVerified: true,
    dti: 0.41,
    ltv: null,
    disclosuresAccepted: ["privacy_notice", "credit_pull_consent"],
    amlFlagged: false,
    dataRegion: "us-east-1",
    requiredRegion: "us-east-1",
  };

  const result = await crew.kickoff({
    inputs: {
      application,
      policyVersion: "2026.01",
      rulesetRef: "lending-policy-v12",
    },
  });

  // The raw output is a string; parse it to JSON before validating
  // against the schema so malformed model output fails loudly.
  const parsed = ComplianceReportSchema.parse(JSON.parse(result.raw));
  console.log(JSON.stringify(parsed, null, 2));
}

main().catch(console.error);
```
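The redaction step mentioned above can be sketched as an allowlist filter applied before the payload reaches the crew. The field list here is a hypothetical example drawn from the sample application, not a complete compliance allowlist:

```typescript
// Allowlist-based redaction: only the fields needed for compliance
// checks reach the model. This list is illustrative, not exhaustive.
const COMPLIANCE_FIELDS = [
  "applicationId", "productType", "jurisdiction", "incomeVerified",
  "dti", "ltv", "disclosuresAccepted", "amlFlagged",
  "dataRegion", "requiredRegion",
] as const;

export function redactForPrompt(payload: Record<string, unknown>) {
  const out: Record<string, unknown> = {};
  for (const key of COMPLIANCE_FIELDS) {
    if (key in payload) out[key] = payload[key];
  }
  // SSNs, bank statements, and other raw identity docs never pass through.
  return out;
}
```

An allowlist is safer than a denylist here: a new PII field added upstream is excluded by default instead of leaking until someone remembers to block it.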
If you need stronger control over tool usage, add a tool that fetches policy by version rather than embedding policy text directly in prompts. That gives you repeatability when auditors ask which rule set was applied on a specific date.
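One way to sketch that lookup, assuming a simple in-memory store standing in for a real policy service (the CrewAI tool wiring and the ruleset names are illustrative):

```typescript
// Versioned policy lookup: the agent requests policy text by ruleset
// reference, so the exact rules applied on a given date are reproducible.
// The Map is an in-memory stand-in for a real policy store.
const policyStore = new Map<string, string>([
  ["lending-policy-v12", "R-1: income must be verified. R-2: DTI must not exceed 0.43."],
]);

export function fetchPolicy(rulesetRef: string): string {
  const text = policyStore.get(rulesetRef);
  // Fail loudly rather than silently falling back to a default ruleset.
  if (!text) throw new Error(`Unknown ruleset: ${rulesetRef}`);
  return text;
}
```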
Production Considerations
- Deploy in-region
  - Keep inference inside the same geography as your lending data.
  - If your policy says EU borrower data must stay in EU storage and compute, enforce that before any agent call.
- Log everything
  - Store prompt inputs after redaction, retrieved policy version, model output, final decision code, and human override.
  - You need this trail for fair lending reviews and internal audits.
- Use deterministic gates after the agent
  - Let the agent identify issues.
  - Let hard-coded rules decide whether a file can auto-pass or must be escalated to compliance ops.
- Monitor false positives by product type
  - Personal loans, auto loans, mortgages, and SME lending have different thresholds.
  - Track precision by product so your review queue does not explode on one segment.
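Tracking precision per product segment can be as simple as counting how many agent flags a human reviewer confirmed, grouped by product. A minimal sketch with assumed field names:

```typescript
// Per-product precision: of the findings the agent flagged, how many did
// a human reviewer confirm? Field names are assumptions.
type ReviewedFlag = { productType: string; confirmed: boolean };

export function precisionByProduct(flags: ReviewedFlag[]): Record<string, number> {
  const totals: Record<string, { flagged: number; confirmed: number }> = {};
  for (const f of flags) {
    const t = (totals[f.productType] ??= { flagged: 0, confirmed: 0 });
    t.flagged += 1;
    if (f.confirmed) t.confirmed += 1;
  }
  const out: Record<string, number> = {};
  for (const [product, t] of Object.entries(totals)) {
    out[product] = t.confirmed / t.flagged;
  }
  return out;
}
```

A segment whose precision drifts down is flooding reviewers with false positives; that is the signal to tune rules for that product rather than globally.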
Common Pitfalls
- Letting the model make the final lending decision
  - The agent should flag compliance risk, not approve or deny credit.
  - Put a deterministic decision layer after it.
- Sending full PII into prompts
  - Do not dump bank statements, SSNs, or full identity docs into context unless absolutely required.
  - Redact first and pass only the fields needed for compliance checks.
- Not versioning policies
  - If you cannot reproduce which rule set was used on a specific application date, your audit trail is weak.
  - Store `policyVersion`, `rulesetRef`, and the model version with every run.
A good lending compliance agent is boring in the right way. It produces structured findings, points to evidence, respects residency constraints, and leaves an audit trail strong enough for legal and risk teams to trust it.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.