How to Build a Loan Approval Agent Using CrewAI in TypeScript for Lending
A loan approval agent automates the first pass of a lending decision: it gathers applicant data, checks policy constraints, analyzes risk signals, and produces a recommendation with an audit trail. For lenders, this matters because it reduces manual review load, shortens turnaround time, and enforces consistent underwriting rules without replacing human approval where regulations require it.
Architecture
- **Applicant intake layer**
  - Accepts borrower data from your LOS, CRM, or application form.
  - Normalizes fields like income, debt, employment type, and requested amount.
- **Policy and compliance agent**
  - Checks hard rules: minimum credit score, DTI thresholds, KYC/AML flags, product eligibility.
  - Produces a decision trace that compliance teams can review.
- **Risk analysis agent**
  - Summarizes risk factors from structured data and external sources.
  - Flags exceptions such as unstable income or a thin credit file.
- **Decision synthesizer**
  - Combines policy output and risk analysis into approve / reject / manual review.
  - Returns a reasoned recommendation instead of a black-box score.
- **Audit logger**
  - Persists prompts, tool outputs, model versions, and final decisions.
  - Needed for model governance, dispute handling, and regulator review.
- **Human review handoff**
  - Escalates edge cases to an underwriter.
  - Keeps the agent inside decision-support boundaries where required.
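The synthesizer's three-way outcome is easier to audit as an explicit typed contract. A minimal sketch (the `LoanDecision` type and `synthesizeDecision` helper are illustrative names, not CrewAI APIs; the defaulting-to-review rule is an assumption you should align with your own policy):

```typescript
// Illustrative decision contract for the synthesizer stage.
// "manual_review" is the safe default whenever policy passes but risk
// analysis still raises flags.
type LoanDecisionStatus = "approve" | "reject" | "manual_review";

interface LoanDecision {
  status: LoanDecisionStatus;
  reasons: string[]; // human-readable trace for the audit logger
}

function synthesizeDecision(policyPass: boolean, riskFlags: string[]): LoanDecision {
  if (!policyPass) {
    return { status: "reject", reasons: ["Failed hard policy constraints"] };
  }
  if (riskFlags.length > 0) {
    return { status: "manual_review", reasons: riskFlags };
  }
  return { status: "approve", reasons: ["Policy pass, no outstanding risk flags"] };
}
```

Because hard policy failures short-circuit before risk flags are consulted, a compliance reviewer can always tell which rule drove the outcome.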
Implementation
1) Install CrewAI for TypeScript and define your domain types
The TypeScript SDK exposes the same core concepts you use in Python: Agent, Task, Crew, and Process. Keep your lending schema explicit so you can validate inputs before any LLM call.
```shell
npm install @crew-ai/crewai zod
```

```typescript
import { z } from "zod";
import { Agent, Crew, Process, Task } from "@crew-ai/crewai";

const LoanApplicationSchema = z.object({
  applicantId: z.string(),
  loanAmount: z.number().positive(),
  annualIncome: z.number().nonnegative(),
  monthlyDebt: z.number().nonnegative(),
  creditScore: z.number().int().min(300).max(850),
  employmentStatus: z.enum(["employed", "self_employed", "unemployed"]),
  country: z.string(),
});

type LoanApplication = z.infer<typeof LoanApplicationSchema>;

export const application: LoanApplication = LoanApplicationSchema.parse({
  applicantId: "app_10021",
  loanAmount: 25000,
  annualIncome: 98000,
  monthlyDebt: 1450,
  creditScore: 712,
  employmentStatus: "employed",
  country: "US",
});
```
2) Create specialized agents for policy review and risk analysis
Do not make one general-purpose agent do everything. In lending, separation of duties is useful for auditability and easier calibration. The policy agent should be strict; the risk agent should explain signals.
```typescript
const policyAgent = new Agent({
  role: "Lending Policy Analyst",
  goal: "Check the application against underwriting policy and return a compliant recommendation.",
  backstory:
    "You are a conservative lending operations analyst who follows documented underwriting rules exactly.",
});

const riskAgent = new Agent({
  role: "Credit Risk Analyst",
  goal: "Assess borrower risk using provided financial data and explain key drivers clearly.",
  backstory:
    "You analyze income stability, debt burden, credit quality, and exceptions without inventing missing facts.",
});

const policyTask = new Task({
  description: `Review this loan application for hard policy constraints:
${JSON.stringify(application)}

Return:
- DTI estimate
- policy pass/fail
- any compliance flags
- recommendation for manual review if needed`,
  expectedOutput: "Structured underwriting policy assessment with explicit reasons.",
});

const riskTask = new Task({
  description: `Analyze the same application for credit risk:
${JSON.stringify(application)}

Focus on:
- repayment capacity
- concentration risk
- thin-file or exception indicators
- what additional documents would reduce uncertainty`,
  expectedOutput: "Risk memo with concrete borrower-specific observations.",
});
```
3) Orchestrate the crew and produce a decision object
Use a sequential process so the policy result is evaluated before the final recommendation. That keeps hard rules ahead of softer narrative analysis.
```typescript
const crew = new Crew({
  agents: [policyAgent, riskAgent],
  tasks: [policyTask, riskTask],
  process: Process.sequential,
});

async function runLoanReview() {
  const result = await crew.kickoff();
  return {
    applicantId: application.applicantId,
    status: "manual_review", // replace with parsed decision logic from outputs
    crewOutput: result,
    reviewedAt: new Date().toISOString(),
    modelVersion: process.env.CREW_MODEL_VERSION ?? "unknown",
  };
}

runLoanReview().then((decision) => {
  console.log(JSON.stringify(decision, null, 2));
});
```
In production, parse each task output into a typed contract instead of reading free-form text. A common pattern is to require JSON-only outputs from each task and validate them with Zod before writing to your decision store.
4) Add guardrails around sensitive lending data
Loan files contain PII and regulated data. Redact what the model does not need, keep residency constraints in mind, and store only the minimum necessary audit payload.
```typescript
// Pass the model only the fields it needs. In a real system the full
// application record also carries name, address, tax ID, and account
// numbers; those stay in your system of record and never reach the prompt.
function redactForLLM(app: LoanApplication) {
  return {
    applicantId: app.applicantId, // opaque ID, safe for tracing
    loanAmount: app.loanAmount,
    annualIncome: app.annualIncome,
    monthlyDebt: app.monthlyDebt,
    creditScore: app.creditScore,
    employmentStatus: app.employmentStatus,
    country: app.country,
  };
}
```
Use this redacted object in your task descriptions or tool inputs. Keep raw documents in your secure system of record; do not dump bank statements or identity documents directly into prompts unless your legal/compliance team has approved that flow.
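For example, a prompt builder (a hypothetical helper, not a CrewAI API) can guarantee that task descriptions only ever see the redacted view, while also pinning the output format:

```typescript
// Minimal redacted view of an application, mirroring redactForLLM's output.
interface RedactedApplication {
  applicantId: string;
  loanAmount: number;
  annualIncome: number;
  monthlyDebt: number;
  creditScore: number;
  employmentStatus: string;
  country: string;
}

// Hypothetical helper: builds the policy task description from the
// redacted view only, and demands JSON so outputs stay machine-checkable.
function buildPolicyTaskDescription(app: RedactedApplication): string {
  return [
    "Review this loan application for hard policy constraints:",
    JSON.stringify(app),
    "Respond with JSON only, using keys:",
    '{"dtiEstimate": number, "policyPass": boolean, "complianceFlags": string[], "manualReviewRecommended": boolean}',
  ].join("\n");
}
```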
Production Considerations
- **Deployment**: Run the agent inside your private VPC or approved cloud region if data residency applies. For multi-jurisdiction lending, pin workloads to region-specific deployments so customer data does not cross borders unnecessarily.
- **Monitoring**: Track approval rate by segment, manual-review rate, false positives on compliance flags, and average turnaround time. Also log prompt version, model version, task outputs, and final human override rate for every decision.
- **Guardrails**: Enforce hard business rules outside the LLM when possible. For example, if DTI exceeds the policy max or KYC is incomplete, short-circuit to rejection or manual review before calling the agent.
- **Auditability**: Persist every intermediate artifact with timestamps and immutable IDs. If a borrower disputes a decision or an auditor asks why an application was escalated, you need the exact reasoning chain plus source inputs.
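The deterministic short-circuit described above can be sketched as a pre-check that runs before any agent call. The thresholds here (45% max DTI, 580 credit score floor) are illustrative placeholders, not real underwriting values:

```typescript
// Illustrative hard-rule pre-check. Runs in plain code before the crew
// is invoked, so the thresholds are versioned and unit-testable.
const POLICY = { maxDti: 0.45, minCreditScore: 580 };

interface PreCheckInput {
  annualIncome: number;
  monthlyDebt: number;
  creditScore: number;
  kycComplete: boolean;
}

type PreCheckResult =
  | { proceed: true; dti: number }
  | { proceed: false; route: "reject" | "manual_review"; reason: string };

function preCheck(app: PreCheckInput): PreCheckResult {
  if (!app.kycComplete) {
    return { proceed: false, route: "manual_review", reason: "KYC incomplete" };
  }
  // DTI = monthly debt obligations / gross monthly income.
  const dti = app.monthlyDebt / (app.annualIncome / 12);
  if (dti > POLICY.maxDti) {
    return { proceed: false, route: "reject", reason: `DTI ${dti.toFixed(2)} exceeds policy max` };
  }
  if (app.creditScore < POLICY.minCreditScore) {
    return { proceed: false, route: "reject", reason: "Credit score below policy floor" };
  }
  return { proceed: true, dti };
}
```

Only applications that survive this gate are worth the cost and latency of an LLM review, and every hard rejection carries a deterministic, testable reason.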
Common Pitfalls
- **Using one generic agent for everything**: Use separate agents for policy checks and risk reasoning. Mixing them usually produces inconsistent decisions and makes it harder to prove which rule caused an outcome.
- **Letting the LLM decide hard compliance rules**: Do not ask the model to "figure out" regulatory thresholds from context. Encode those thresholds in deterministic code so they are versioned and testable.
- **Skipping output validation**: Free-form text is not acceptable as a downstream decision input. Validate every agent response with Zod or similar schemas before storing it or triggering workflow actions.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit