How to Build a Loan Approval Agent Using CrewAI in Python for Lending
A loan approval agent automates the first pass of lending decisions: it gathers applicant data, checks policy rules, scores risk, and produces a decision package for a human underwriter. For lending teams, this matters because it reduces manual review time, enforces policy consistency, and creates an audit trail you can defend during compliance reviews.
Architecture
Applicant intake layer
- Collects borrower profile, income, employment, liabilities, and requested loan terms.
- Normalizes inputs before they hit the agent workflow.

Policy retrieval component
- Pulls the current lending policy, eligibility thresholds, and product rules.
- Keeps decisions aligned with versioned underwriting policy.

Risk analysis agent
- Evaluates affordability, debt-to-income ratio, credit signals, and exceptions.
- Produces a structured recommendation instead of free-form text.

Compliance reviewer agent
- Checks adverse action requirements, fair lending constraints, and prohibited attributes.
- Flags missing disclosures or unsupported decision factors.

Decision orchestrator
- Coordinates the agents in sequence using CrewAI.
- Assembles a final recommendation: approve, refer, or decline.

Audit logging layer
- Stores inputs, outputs, policy version, timestamps, and reasoning traces.
- Supports model governance and regulatory review.
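As a sketch of what the audit logging layer might persist, each run can be stored as a structured record. The field names below are illustrative assumptions, not a required schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    """Illustrative audit-trail entry for one agent decision run."""

    applicant_id: str
    policy_version: str
    decision_code: str  # e.g. APPROVE, REFER, or DECLINE
    reasoning_trace: list[str] = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = AuditRecord(
    applicant_id="APP-10021",
    policy_version="2024.06",
    decision_code="REFER",
    reasoning_trace=["DTI within limits", "Referred: thin credit file"],
)
```

Serializing with `asdict(record)` gives you a dict you can write to whatever audit store your governance team approves.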
Implementation
1) Install dependencies and define your data model
Use CrewAI with typed inputs so your workflow stays predictable. For lending systems, keep the application payload explicit; don’t pass raw JSON blobs around without schema discipline.
```python
from pydantic import BaseModel, Field
from typing import Optional


class LoanApplication(BaseModel):
    applicant_id: str
    annual_income: float = Field(gt=0)
    monthly_debt: float = Field(ge=0)
    requested_amount: float = Field(gt=0)
    term_months: int = Field(gt=0)
    credit_score: int = Field(ge=300, le=850)
    state: str
    product_type: str
    existing_customer: bool = False
    notes: Optional[str] = None
```
2) Create agents with narrow responsibilities
CrewAI works best when each agent has one job. Keep the risk logic separate from compliance logic so you can audit each step independently.
```python
from crewai import Agent

risk_agent = Agent(
    role="Loan Risk Analyst",
    goal="Assess affordability and credit risk for a loan application",
    backstory="You analyze lending applications using policy thresholds and risk signals.",
    verbose=True,
)

compliance_agent = Agent(
    role="Compliance Reviewer",
    goal="Check the recommendation for fair lending and disclosure issues",
    backstory="You ensure decisions are explainable and compliant with lending policy.",
    verbose=True,
)

decision_agent = Agent(
    role="Loan Decision Specialist",
    goal="Produce a final decision summary for underwriting review",
    backstory="You synthesize risk and compliance findings into a concise decision memo.",
    verbose=True,
)
```
3) Define tasks and run a Crew
This is the core pattern. Each task should produce structured output that downstream tasks can consume. In production, wire the task descriptions to real policy documents and internal scoring services.
```python
from crewai import Task, Crew, Process

risk_task = Task(
    description=(
        "Review the loan application and calculate key lending metrics. "
        "Application data: {application}. "
        "Return DTI estimate, affordability concerns, and one of: "
        "APPROVE_RECOMMENDATION, REFER_REVIEW, DECLINE_RECOMMENDATION."
    ),
    expected_output="A concise risk assessment with a recommendation code.",
    agent=risk_agent,
)

compliance_task = Task(
    description=(
        "Review the risk assessment for fair lending concerns, missing disclosures, "
        "and any prohibited or unsupported decision factors. Return compliance status."
    ),
    expected_output="A compliance review with PASS or FAIL plus rationale.",
    agent=compliance_agent,
)

decision_task = Task(
    description=(
        "Combine the risk assessment and compliance review into a final underwriting memo. "
        "Include decision code, reasons for decision, required follow-up items, and audit notes."
    ),
    expected_output="A final underwriting memo suitable for human review.",
    agent=decision_agent,
)

crew = Crew(
    agents=[risk_agent, compliance_agent, decision_agent],
    tasks=[risk_task, compliance_task, decision_task],
    process=Process.sequential,
    verbose=True,
)

application = LoanApplication(
    applicant_id="APP-10021",
    annual_income=95000,
    monthly_debt=1450,
    requested_amount=18000,
    term_months=48,
    credit_score=712,
    state="CA",
    product_type="personal_loan",
)

# The {application} placeholder in the risk task description is filled
# from these kickoff inputs; without it the agents never see the payload.
result = crew.kickoff(inputs={"application": application.model_dump()})
print(result)
```
4) Add guardrails around the agent output
Do not let an LLM make an unbounded lending decision. Use deterministic checks for hard rules like minimum score thresholds or maximum DTI; let CrewAI handle explanation and synthesis.
```python
def hard_rule_check(app: LoanApplication) -> list[str]:
    """Deterministic policy gates that run before any LLM call."""
    issues = []
    # Monthly debt-to-income ratio: monthly debt over monthly gross income.
    dti = app.monthly_debt / (app.annual_income / 12)
    if app.credit_score < 620:
        issues.append("Credit score below minimum threshold")
    if dti > 0.45:
        issues.append("DTI above maximum threshold")
    return issues
```
If `hard_rule_check()` returns violations, route the file to manual review before calling `crew.kickoff()`. That keeps your system defensible when auditors ask why a file was declined.
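A minimal wrapper for that routing might look like the sketch below. It is written generically so the hard rules are injected as a callable; `evaluate_application` and the return-dict fields are illustrative names, not a CrewAI API:

```python
from typing import Any, Callable


def evaluate_application(
    app: Any,
    crew: Any,
    hard_rules: Callable[[Any], list[str]],
) -> dict:
    """Run deterministic hard rules first; invoke the crew only if they pass."""
    violations = hard_rules(app)
    if violations:
        # Hard-rule failures never reach the LLM: a deterministic, auditable path.
        return {"status": "MANUAL_REVIEW", "reasons": violations, "crew_output": None}
    result = crew.kickoff(inputs={"application": app.model_dump()})
    return {"status": "AGENT_REVIEWED", "reasons": [], "crew_output": str(result)}
```

Because the decision to skip the crew is plain Python, the "why was this file referred?" answer is always a list of named rule violations rather than model output.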
Production Considerations
Deployment
- Run the CrewAI workflow behind an internal API service.
- Keep policy documents and applicant data in approved regions to satisfy data residency requirements.
- Separate PII storage from model prompts where possible.
Monitoring
- Log every `kickoff()` run with application ID, policy version, timestamps, task outputs, and final recommendation.
- Track referral rates by branch, product type, geography, and protected-class proxies to catch drift or bias patterns.
- Alert on malformed outputs or missing compliance fields.
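One way to sketch that per-run logging is a structured JSON line per kickoff. The field names are assumptions for illustration, not a CrewAI feature:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("loan_agent.audit")


def log_kickoff(
    applicant_id: str,
    policy_version: str,
    task_outputs: dict[str, str],
    recommendation: str,
) -> str:
    """Emit one structured JSON line per crew.kickoff() run and return it."""
    entry = {
        "applicant_id": applicant_id,
        "policy_version": policy_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "task_outputs": task_outputs,
        "recommendation": recommendation,
    }
    line = json.dumps(entry, sort_keys=True)
    logger.info(line)
    return line
```

JSON lines make the referral-rate and drift queries above a straightforward aggregation job in whatever log store you already run.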
Guardrails
- Enforce deterministic pre-checks for minimum score floors, max DTI limits, KYC status, and fraud flags.
- Block any prompt content that includes prohibited attributes like race or religion.
- Require human approval on declines above a configured exposure threshold.
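A simple deterministic screen for prohibited attributes might look like the sketch below. The term list is a placeholder; your compliance team owns the real one, and a production screen would likely also cover proxies and misspellings:

```python
# Placeholder list: the authoritative set comes from compliance, not engineering.
PROHIBITED_TERMS = {"race", "religion", "national origin", "marital status"}


def contains_prohibited_attributes(prompt_payload: dict) -> list[str]:
    """Return any prohibited terms found in the serialized prompt payload."""
    text = str(prompt_payload).lower()
    return sorted(term for term in PROHIBITED_TERMS if term in text)


# Example: run this before kickoff and block the call if anything is flagged.
flags = contains_prohibited_attributes({"notes": "Applicant mentioned religion"})
```

If `flags` is non-empty, refuse to build the prompt at all and log the event, rather than trusting the model to ignore the attribute.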
Auditability
- Store prompt templates and task definitions in version control.
- Persist the exact agent outputs used in each decision file.
- Make sure adverse action reasons map to approved reason codes.
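That last mapping can be as simple as an explicit lookup that fails loudly on anything unapproved. The codes below are illustrative, not actual regulatory reason codes:

```python
# Illustrative mapping from internal decline reasons to approved reason codes.
REASON_CODE_MAP = {
    "Credit score below minimum threshold": "RC-01",
    "DTI above maximum threshold": "RC-02",
    "Insufficient income documentation": "RC-03",
}


def to_reason_codes(reasons: list[str]) -> list[str]:
    """Map internal reasons to approved codes; unmapped reasons fail loudly."""
    codes = []
    for reason in reasons:
        if reason not in REASON_CODE_MAP:
            raise ValueError(f"No approved reason code for: {reason!r}")
        codes.append(REASON_CODE_MAP[reason])
    return codes
```

Failing loudly on an unmapped reason is deliberate: it surfaces any free-form model output that slipped past the structured recommendation format before it reaches an adverse action notice.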
Common Pitfalls
Using one agent for everything
- This makes debugging painful and weakens auditability.
- Split risk analysis, compliance review, and final synthesis into separate agents with narrow goals.

Letting the LLM decide hard policy rules
- If the model decides score cutoffs or legal eligibility on its own, you will get inconsistent outcomes.
- Put those rules in deterministic Python before invoking CrewAI.

Ignoring explainability requirements
- A good recommendation is not enough if you cannot explain it in regulatory terms.
- Always generate reason codes tied to policy language and store them with the case file.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.