How to Build a Loan Approval Agent Using CrewAI in Python for Retail Banking
A loan approval agent automates the first pass of retail lending decisions: it gathers applicant data, checks policy rules, scores risk, and produces a recommendation with an audit trail. For retail banking, that matters because you want faster turnaround for customers without losing control over compliance, explainability, and credit policy enforcement.
Architecture
- Application intake agent
  - Extracts structured fields from the loan application: income, employment status, existing debt, requested amount, term, and purpose.
- Policy/compliance checker
  - Validates the application against lending policy: minimum income thresholds, debt-to-income limits, age requirements, KYC completeness, and prohibited-use rules.
- Risk analyst agent
  - Produces a risk summary from bureau data or internal scoring inputs.
  - Should not make final credit decisions on its own if your bank requires human review.
- Decision orchestrator
  - Coordinates the agents in sequence using CrewAI `Task` and `Crew`.
  - Combines outputs into approve / refer / decline recommendations.
- Audit logger
  - Persists every input, intermediate output, and final recommendation for model risk management and regulator review.
- Human-in-the-loop gate
  - Routes borderline cases to an underwriter when policy exceptions or low-confidence outputs appear.
Implementation
1) Install and define your tools
CrewAI works best when the agents can call deterministic tools for bank data rather than inventing answers. Keep the LLM focused on reasoning and use tools for retrieval and calculations.
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import tool  # newer CrewAI releases expose this as: from crewai.tools import tool
import json

@tool("calculate_dti")
def calculate_dti(monthly_debt: float, monthly_income: float) -> str:
    """Calculate debt-to-income ratio."""
    if monthly_income <= 0:
        return "invalid_income"
    dti = monthly_debt / monthly_income
    return json.dumps({"dti": round(dti, 4)})

@tool("check_policy_rules")
def check_policy_rules(income: float, dti: float, loan_amount: float) -> str:
    """Basic retail lending policy checks."""
    rules = {
        "min_income": income >= 2500,
        "max_dti": dti <= 0.45,
        "max_loan_amount": loan_amount <= 50000,
    }
    return json.dumps(rules)
```
2) Create specialized agents
Keep each agent narrow. In banking systems, broad “do everything” agents are hard to audit and harder to defend in model governance reviews.
```python
underwriting_agent = Agent(
    role="Retail Loan Underwriter",
    goal="Assess loan applications against bank policy and produce a decision recommendation.",
    backstory=(
        "You are a conservative retail banking underwriter. "
        "You must follow policy exactly and flag exceptions for human review."
    ),
    tools=[calculate_dti, check_policy_rules],
    verbose=True,
)

compliance_agent = Agent(
    role="Loan Compliance Reviewer",
    goal="Verify KYC completeness and policy compliance before any approval recommendation.",
    backstory=(
        "You review applications for compliance gaps, missing documentation, "
        "and retail banking policy violations."
    ),
    verbose=True,
)
```
3) Define tasks with explicit outputs
Use Task descriptions that force structured outputs. This makes downstream logging easier and reduces ambiguity in approval workflows.
```python
application_data = {
    "customer_id": "CUST-10291",
    "monthly_income": 4200,
    "monthly_debt": 1350,
    "loan_amount": 18000,
    "kyc_complete": True,
    "employment_status": "full_time",
}

underwriting_task = Task(
    description=(
        f"Review this loan application JSON: {json.dumps(application_data)}. "
        "Calculate DTI using the tool. Check policy rules using the tool. "
        "Return a JSON object with fields: dti, policy_result, recommendation, reason."
    ),
    expected_output="Valid JSON containing underwriting recommendation.",
    agent=underwriting_agent,
)

compliance_task = Task(
    description=(
        f"Review this loan application JSON: {json.dumps(application_data)}. "
        "Check whether KYC is complete and whether there are any compliance concerns. "
        "Return a JSON object with fields: kyc_complete, compliance_flags."
    ),
    expected_output="Valid JSON containing compliance findings.",
    agent=compliance_agent,
)
```
4) Run the crew and combine results
For a production pattern, use sequential processing so compliance checks happen before the final recommendation. That keeps your decision path deterministic enough for audit.
```python
crew = Crew(
    agents=[compliance_agent, underwriting_agent],
    tasks=[compliance_task, underwriting_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print("Final result:")
print(result)
```
A practical next step is to post-process `result` into a bank-friendly decision object with fields such as:

- `approve`
- `refer_to_underwriter`
- `decline`
- `reason_codes`
- `audit_trace_id`
That structure matters because retail lending teams need reason codes for adverse action notices and internal case management.
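As a sketch of that post-processing step (the field names are illustrative, and the exact shape of `result` varies by CrewAI version, so the output is coerced to `str` first):

```python
import json
import re
import uuid

def to_decision_object(raw_result) -> dict:
    """Convert a crew's final output into a bank-friendly decision object.

    Assumes the underwriting task returned JSON (possibly wrapped in prose);
    anything unparseable is routed to human review rather than auto-decided.
    """
    text = str(raw_result)
    match = re.search(r"\{.*\}", text, re.DOTALL)  # pull out the JSON payload
    try:
        parsed = json.loads(match.group(0)) if match else {}
    except json.JSONDecodeError:
        parsed = {}

    recommendation = parsed.get("recommendation", "refer_to_underwriter")
    if recommendation not in {"approve", "refer_to_underwriter", "decline"}:
        recommendation = "refer_to_underwriter"  # unknown output -> human review

    return {
        "decision": recommendation,
        "reason_codes": parsed.get("reason_codes", []),
        "audit_trace_id": str(uuid.uuid4()),  # links the case to stored logs
    }
```

Defaulting to `refer_to_underwriter` on any parsing failure is the conservative choice: the agent can never fail into an approval.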
Production Considerations
- Deploy behind a policy boundary
  - The agent should never directly approve disbursement.
  - Put it behind an underwriting service that enforces hard rules before any recommendation becomes a decision.
- Log every step for audit
  - Store prompts, tool inputs/outputs, model version, timestamps, and final recommendation.
  - This is non-negotiable for model risk management and regulator review.
- Add guardrails around sensitive data
  - Mask PII in logs.
  - Restrict access to bureau data and income statements.
  - Keep customer data within approved regions to satisfy data residency requirements.
- Monitor drift and exception rates
  - Track approval rate by segment, manual override rate, false declines, and missing-document frequency.
  - If override rates spike, your prompt logic or policy thresholds are drifting.
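A minimal sketch of the log-masking guardrail, assuming customer IDs follow the `CUST-#####` pattern used above and that income/debt amounts should not appear in plain text (both patterns are illustrative; a real deployment should use a vetted PII/DLP library):

```python
import re

# Illustrative patterns for this tutorial's data shapes only.
CUSTOMER_ID = re.compile(r"CUST-\d+")
MONEY_FIELD = re.compile(r'("monthly_(?:income|debt)":\s*)\d+(\.\d+)?')

def mask_pii(log_line: str) -> str:
    """Redact customer identifiers and income/debt amounts before logging."""
    masked = CUSTOMER_ID.sub("CUST-****", log_line)
    masked = MONEY_FIELD.sub(r"\1[REDACTED]", masked)
    return masked
```

Apply this at the logging boundary, so raw values never reach the audit store in the first place.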
Common Pitfalls
- Letting the LLM make unsupported credit decisions
  - Avoid free-form "approve/decline" reasoning without deterministic checks.
  - Use tools for DTI calculations and rule enforcement; keep the model as a reviewer.
- Skipping structured outputs
  - If the agent returns prose instead of JSON-like fields, downstream systems become brittle.
  - Force fields like `recommendation`, `reason_codes`, and `policy_result`.
- Ignoring compliance boundaries
  - Don't feed raw PII into logs or external services without controls.
  - Apply masking, access control, retention policies, and region constraints from day one.
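One way to enforce the structured-output requirement is a plain validation function at the boundary (field names follow the tasks above; swap in a schema library such as pydantic if your stack already uses one):

```python
import json

REQUIRED_FIELDS = {"recommendation", "reason_codes", "policy_result"}
VALID_RECOMMENDATIONS = {"approve", "refer_to_underwriter", "decline"}

def validate_agent_output(raw: str) -> dict:
    """Parse and validate an agent's JSON output before it reaches downstream systems.

    Raises ValueError so the orchestrator can route the case to a human
    instead of silently accepting malformed output.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"agent returned non-JSON output: {exc}") from exc

    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    if data["recommendation"] not in VALID_RECOMMENDATIONS:
        raise ValueError(f"unknown recommendation: {data['recommendation']!r}")
    return data
```

Failing loudly here is the point: a `ValueError` becomes a referral to an underwriter, not a silent bad decision.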
If you want this to hold up in a real retail bank stack, treat CrewAI as the orchestration layer—not the source of truth. The source of truth is your lending policy engine plus auditable tools; CrewAI coordinates them into a workflow that underwriters can trust.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.