How to Build a Compliance-Checking Agent Using AutoGen in TypeScript for Lending
A compliance checking agent for lending reviews a loan application against policy, regulatory rules, and internal credit standards before a human underwriter approves it. It matters because lending decisions need to be explainable, auditable, and consistent across applicants, channels, and jurisdictions.
Architecture
- Input adapter
  - Normalizes application data from LOS, CRM, or underwriting systems.
  - Redacts or tokenizes sensitive fields before sending anything to the model.
- Policy retrieval layer
  - Pulls current lending policy snippets, product rules, and jurisdiction-specific constraints.
  - Keeps the agent grounded in approved source material instead of free-form reasoning.
- Compliance agent
  - Uses an AutoGen AssistantAgent to inspect the application against policy.
  - Produces structured findings: pass, fail, needs_review, and a rationale.
- Human review gate
  - Routes ambiguous or high-risk cases to an underwriter or compliance officer.
  - Prevents the agent from making final credit decisions.
- Audit log store
  - Persists prompts, retrieved policy references, model outputs, and final disposition.
  - Supports regulator review and internal model governance.
- Decision API
  - Exposes the result back to the lending platform as JSON.
  - Returns machine-readable flags for downstream workflow orchestration.
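The input adapter's redaction step can be sketched in plain TypeScript. The field names below are hypothetical, not part of any AutoGen API; the point is that sensitive fields are dropped or masked before the record can reach a prompt.

```typescript
// Hypothetical raw record from an LOS/CRM export; field names are illustrative.
type RawApplication = {
  applicantId: string;
  ssn: string;
  bankAccount: string;
  state: string;
  income: number;
};

// Redacted shape that is safe to include in a model prompt.
type RedactedApplication = Omit<RawApplication, "ssn" | "bankAccount"> & {
  ssnLast4: string;
};

// Drop the account number entirely and keep only the SSN's last four digits.
function redactApplication(raw: RawApplication): RedactedApplication {
  const { ssn, bankAccount, ...safe } = raw;
  return { ...safe, ssnLast4: ssn.slice(-4) };
}

const redacted = redactApplication({
  applicantId: "app_123",
  ssn: "123-45-6789",
  bankAccount: "000111222",
  state: "NY",
  income: 72000,
});
console.log(redacted);
```

Tokenization (replacing values with reversible references held in a vault) follows the same shape; masking is just the simplest variant.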
Implementation
1) Install AutoGen and define the compliance schema
For TypeScript, use the AutoGen package that exposes AssistantAgent, UserProxyAgent, and OpenAIChatCompletionClient. Keep the output shape strict so downstream systems can consume it without parsing prose.
npm install @autogenai/autogen
import {
AssistantAgent,
UserProxyAgent,
OpenAIChatCompletionClient,
} from "@autogenai/autogen";
type LoanApplication = {
applicantId: string;
state: string;
productType: "personal_loan" | "auto_loan" | "mortgage";
income: number;
monthlyDebt: number;
requestedAmount: number;
declaredPurpose: string;
};
type ComplianceFinding = {
status: "pass" | "fail" | "needs_review";
reasons: string[];
policyReferences: string[];
};
const complianceSchemaHint = `
Return only valid JSON:
{
"status": "pass" | "fail" | "needs_review",
"reasons": string[],
"policyReferences": string[]
}
`;
2) Create the assistant with lending-specific instructions
The important part here is that the agent is not a general chat assistant. It must act like a compliance reviewer that cites policy references and avoids making credit decisions outside its scope.
const modelClient = new OpenAIChatCompletionClient({
model: "gpt-4o-mini",
});
const complianceAgent = new AssistantAgent({
name: "lending_compliance_agent",
modelClient,
systemMessage: `
You are a lending compliance checker.
Your job is to review loan applications against provided policy text only.
Do not approve or deny loans. Only produce compliance findings.
Flag missing information, policy conflicts, residency concerns, and suspicious inconsistencies.
Always cite exact policy references from the provided context.
${complianceSchemaHint}
`,
});
const userProxy = new UserProxyAgent({
name: "system",
});
3) Run an application through the agent with policy context
In production you would fetch policy text from your approved knowledge base. The key pattern is to inject only relevant jurisdictional rules and then force structured output.
async function checkCompliance(app: LoanApplication): Promise<ComplianceFinding> {
const debtToIncome = app.monthlyDebt / Math.max(app.income / 12, 1);
const policyContext = `
Policy:
- Personal loans in NY require documented income verification.
- Debt-to-income ratio above 0.45 requires manual review.
- Applications must not contain unverified purpose claims for amounts above $25,000.
- Applicant data must remain in-region for EU residents.
`;
const task = `
Review this loan application for compliance:
${JSON.stringify(
{
...app,
debtToIncomeRatio: Number(debtToIncome.toFixed(2)),
},
null,
2
)}
${policyContext}
`;
const result = await userProxy.initiateChat(complianceAgent, task);
const content = result.messages.at(-1)?.content ?? "{}";
return JSON.parse(content) as ComplianceFinding;
}
async function main() {
const app: LoanApplication = {
applicantId: "app_123",
state: "NY",
productType: "personal_loan",
income: 72000,
monthlyDebt: 2800,
requestedAmount: 30000,
declaredPurpose: "home renovation",
};
const finding = await checkCompliance(app);
console.log(finding);
}
main().catch(console.error);
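One fragile spot in the listing above is the bare JSON.parse: even with the schema hint, some models wrap their JSON in markdown fences or surrounding prose. A tolerant extractor (my own helper, not an AutoGen feature) can pull the first JSON object out of the raw text before parsing:

```typescript
// Extract the first top-level JSON object from a model response that may
// include markdown fences or surrounding prose. Returns null if nothing parses.
function extractJson<T>(raw: string): T | null {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(raw.slice(start, end + 1)) as T;
  } catch {
    return null;
  }
}

// A typical fenced response that plain JSON.parse would reject.
const wrapped =
  '```json\n{"status": "pass", "reasons": [], "policyReferences": []}\n```';
console.log(extractJson<{ status: string }>(wrapped));
```

In checkCompliance, you would call extractJson on the last message's content and treat a null result as a failure to be handled, rather than letting JSON.parse throw mid-workflow.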
That pattern gives you a simple control flow:
- compute deterministic features outside the model
- inject only relevant policy text
- force JSON output
- parse into a typed object
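The first of those points can go further than computing a DTI feature: hard rules can short-circuit before any model call. This is a minimal sketch of that idea, with thresholds taken from the sample policy text above; the routing shape is my own convention.

```typescript
// Result of the deterministic pre-check: either hand off to the agent,
// or force human review without consulting the model at all.
type PreCheck =
  | { route: "model" }
  | { route: "manual"; reason: string };

// Hard rules computed outside the model, mirroring the sample policy.
function preCheck(
  income: number,
  monthlyDebt: number,
  requestedAmount: number,
  purposeVerified: boolean
): PreCheck {
  const dti = monthlyDebt / Math.max(income / 12, 1);
  if (dti > 0.45) {
    return { route: "manual", reason: `DTI ${dti.toFixed(2)} above 0.45` };
  }
  if (requestedAmount > 25000 && !purposeVerified) {
    return { route: "manual", reason: "unverified purpose above $25,000" };
  }
  return { route: "model" };
}

// The sample application from main(): high DTI and an unverified purpose.
console.log(preCheck(72000, 2800, 30000, false));
```

Cases that pass every hard rule still go to the agent for the judgment-heavy checks; cases that fail one never depend on model behavior.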
If your AutoGen version supports tool calling, you can extend this by exposing a retrieval tool for policy lookup rather than embedding raw text. For lending workflows, that is usually better because it keeps policies versioned and auditable.
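One way to sketch such a retrieval tool's backing store, with an in-memory map standing in for your policy knowledge base (the keys, versions, and snippet text here are all hypothetical):

```typescript
// A policy snapshot: the version string is what gets written to the audit
// log alongside each decision, so the exact rules can be reconstructed.
type PolicySnapshot = {
  version: string;
  snippets: string[];
};

// Hypothetical versioned store keyed by product and jurisdiction; in
// production this would be a document store or approved knowledge base.
const policyStore: Record<string, PolicySnapshot> = {
  "personal_loan:NY": {
    version: "2024-06-v3",
    snippets: [
      "Personal loans in NY require documented income verification.",
      "Debt-to-income ratio above 0.45 requires manual review.",
    ],
  },
};

// The function a retrieval tool would wrap: null means "no policy found",
// which should itself route the case to manual review.
function lookupPolicy(productType: string, state: string): PolicySnapshot | null {
  return policyStore[`${productType}:${state}`] ?? null;
}

console.log(lookupPolicy("personal_loan", "NY"));
```

The agent then receives only the returned snippets, and the audit log records the snapshot version rather than whatever text happened to be interpolated into the prompt.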
4) Add hard guards before returning results
Do not let the LLM be the last line of defense. Validate its output and block malformed responses from reaching underwriting workflows.
function validateFinding(finding: ComplianceFinding): ComplianceFinding {
if (!["pass", "fail", "needs_review"].includes(finding.status)) {
throw new Error("Invalid compliance status");
}
if (!Array.isArray(finding.reasons) || finding.reasons.some((r) => typeof r !== "string")) {
throw new Error("Invalid reasons");
}
if (!Array.isArray(finding.policyReferences)) {
throw new Error("Invalid policy references");
}
return finding;
}
Use this validation after parsing. In lending systems, malformed output is not just a bug; it becomes an audit problem.
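A related design choice worth considering: instead of throwing on malformed output, degrade to needs_review so a human sees the case rather than the workflow crashing. This sketch combines parsing and validation into one guarded entry point (the fallback behavior is my suggestion, not an AutoGen feature; the type is repeated so the sketch stands alone):

```typescript
type ComplianceFinding = {
  status: "pass" | "fail" | "needs_review";
  reasons: string[];
  policyReferences: string[];
};

// Parse raw model output; any malformed response degrades safely to
// needs_review instead of throwing into the underwriting workflow.
function safeParseFinding(raw: string): ComplianceFinding {
  const fallback: ComplianceFinding = {
    status: "needs_review",
    reasons: ["Model output could not be parsed"],
    policyReferences: [],
  };
  try {
    const parsed = JSON.parse(raw);
    if (
      !["pass", "fail", "needs_review"].includes(parsed.status) ||
      !Array.isArray(parsed.reasons) ||
      !Array.isArray(parsed.policyReferences)
    ) {
      return fallback;
    }
    return parsed as ComplianceFinding;
  } catch {
    return fallback;
  }
}

console.log(safeParseFinding("not json").status);
```

Whether you throw or degrade depends on your workflow: throwing is safer when an upstream retry exists, degrading is safer when the alternative is a stuck queue. Either way, log the raw output.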
Production Considerations
- Keep regulated data residency in scope
  - Route EU borrower data to an EU-hosted inference endpoint where required.
  - Never send full PII if a masked version is enough for compliance review.
- Log every decision artifact
  - Store input hash, prompt version, retrieved policy version, model response, and final disposition.
  - Regulators care about reproducibility more than clever prompts.
- Put human review on uncertain cases
  - Any needs_review result should create an underwriting task automatically.
  - High DTI ratios, missing income docs, or cross-border residency issues should never auto-pass.
- Version policies separately from code
  - A code deploy should not silently change lending rules.
  - Tie each decision to a specific policy snapshot so audits can reconstruct intent.
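The logging and policy-versioning considerations combine naturally into one audit record per decision. The record shape below is an assumption about what a governance process typically needs; the hash uses Node's built-in crypto module over the redacted input, never raw PII.

```typescript
import { createHash } from "crypto";

// One audit record per decision: enough to reproduce the call later.
type AuditRecord = {
  inputHash: string;     // sha256 of the redacted input, not raw PII
  promptVersion: string;
  policyVersion: string; // ties the decision to a policy snapshot
  modelResponse: string;
  disposition: "pass" | "fail" | "needs_review";
  timestamp: string;
};

function buildAuditRecord(
  redactedInput: unknown,
  promptVersion: string,
  policyVersion: string,
  modelResponse: string,
  disposition: AuditRecord["disposition"]
): AuditRecord {
  const inputHash = createHash("sha256")
    .update(JSON.stringify(redactedInput))
    .digest("hex");
  return {
    inputHash,
    promptVersion,
    policyVersion,
    modelResponse,
    disposition,
    timestamp: new Date().toISOString(),
  };
}

const record = buildAuditRecord(
  { applicantId: "app_123", state: "NY" },
  "prompt-v2",
  "2024-06-v3",
  "{}",
  "needs_review"
);
console.log(record.inputHash);
```

Hashing the redacted input lets an auditor confirm that a stored decision corresponds to a given application without the audit store itself becoming a PII repository.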
Common Pitfalls
- Using free-form responses
  - Problem: The agent returns prose that downstream services cannot reliably parse.
  - Fix: Force JSON output and validate it before using it anywhere else.
- Sending raw PII into prompts
  - Problem: You expose SSNs, bank account numbers, or full addresses unnecessarily.
  - Fix: Redact fields before calling AssistantAgent, and only pass what the rule actually needs.
- Treating the model as final authority
  - Problem: The agent approves or rejects loans without deterministic checks or human oversight.
  - Fix: Keep score thresholds, residency checks, and mandatory-document rules outside the model. Use AutoGen for reasoning support, not final credit adjudication.
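The residency and mandatory-document checks named in that last fix fit in a few lines of deterministic code. The region names and rules below are illustrative only:

```typescript
type Residency = "EU" | "US" | "UK";

// Deterministic routing: residency decides the inference endpoint region
// before any model call, with no model judgment involved.
function inferenceRegion(residency: Residency): "eu-west" | "us-east" {
  return residency === "EU" ? "eu-west" : "us-east";
}

// Mandatory-document rule kept outside the model: missing income docs
// downgrade a model "pass" to needs_review, regardless of the rationale.
function enforceMandatoryDocs(status: string, hasIncomeDocs: boolean): string {
  if (!hasIncomeDocs && status === "pass") return "needs_review";
  return status;
}

console.log(inferenceRegion("EU"), enforceMandatoryDocs("pass", false));
```

Because these guards run after the agent, the model can inform a decision but can never override a hard rule.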
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.