How to Build a Loan Approval Agent Using CrewAI in Python for Fintech
A loan approval agent automates the first pass of credit decisioning: it gathers applicant data, checks policy rules, evaluates risk, and produces a decision package for a human underwriter or an automated workflow. For fintech, this matters because you need faster approvals without losing control over compliance, auditability, and data handling.
Architecture
- Applicant intake
  - Collects customer profile, income, employment, liabilities, and requested loan terms.
  - Normalizes the payload before any agent sees it.
- Policy checker
  - Applies hard rules like minimum income, debt-to-income thresholds, KYC status, and product eligibility.
  - This should be deterministic, not "AI guessed."
- Risk analyst
  - Summarizes credit risk using structured inputs such as bureau score bands, delinquency flags, and affordability signals.
  - Produces a recommendation with reasons.
- Compliance reviewer
  - Checks for missing disclosures, adverse action requirements, fair lending concerns, and prohibited attributes.
  - Keeps the system aligned with regulatory expectations.
- Decision orchestrator
  - Coordinates the agents in sequence and returns a final decision packet.
  - In production, this is where you keep the workflow auditable.
- Audit logger
  - Stores inputs, outputs, timestamps, model version, and rule outcomes.
  - Required for investigations and internal controls.
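The policy checker is the one component that should never be an LLM. It can be sketched as plain deterministic code; the rule names and thresholds below are illustrative assumptions, not real underwriting policy:

```python
# Illustrative hard-rule policy check. Rule names and thresholds are
# made-up examples, not real underwriting policy.
def check_policy(annual_income: float, monthly_debt: float,
                 requested_amount: float, kyc_passed: bool) -> dict:
    reasons = []
    monthly_income = annual_income / 12
    dti = monthly_debt / monthly_income if monthly_income else 1.0
    if not kyc_passed:
        reasons.append("KYC not passed")
    if annual_income < 24_000:                    # example minimum-income rule
        reasons.append("Income below minimum")
    if dti > 0.43:                                # example debt-to-income cap
        reasons.append("Debt-to-income above threshold")
    if requested_amount > 0.5 * annual_income:    # example product limit
        reasons.append("Requested amount above product limit")
    return {"eligible": not reasons, "reasons": reasons}

print(check_policy(95_000, 1_200, 15_000, True))
# → {'eligible': True, 'reasons': []}
```

Because every rule returns an explicit reason string, this function doubles as the source of adverse action reasons later.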
Implementation
1) Install CrewAI and define your data contract
Keep the input shape explicit. Loan decisions fail when upstream systems send messy JSON with missing fields or inconsistent names.
```python
from pydantic import BaseModel, Field

class LoanApplication(BaseModel):
    applicant_id: str
    annual_income: float = Field(gt=0)
    monthly_debt: float = Field(ge=0)
    requested_amount: float = Field(gt=0)
    credit_score: int = Field(ge=300, le=850)
    employment_status: str
    kyc_passed: bool
    country: str
    purpose: str
    existing_customer: bool = False
    adverse_events_last_12m: int = Field(ge=0)
```
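With the contract in place, malformed upstream payloads fail fast at the boundary instead of deep inside an agent run. A quick sketch of that behavior (the model is repeated so the snippet runs standalone; the bad payload is invented for illustration):

```python
from pydantic import BaseModel, Field, ValidationError

class LoanApplication(BaseModel):  # same contract as above
    applicant_id: str
    annual_income: float = Field(gt=0)
    monthly_debt: float = Field(ge=0)
    requested_amount: float = Field(gt=0)
    credit_score: int = Field(ge=300, le=850)
    employment_status: str
    kyc_passed: bool
    country: str
    purpose: str
    existing_customer: bool = False
    adverse_events_last_12m: int = Field(ge=0)

bad_payload = {
    "applicant_id": "app_10002",
    "annual_income": 52_000,
    "monthly_debt": 900,
    "requested_amount": 8_000,
    "credit_score": 900,       # outside the 300-850 range
    "employment_status": "full_time",
    "kyc_passed": True,
    "country": "US",
    "purpose": "auto",
    # adverse_events_last_12m is missing entirely
}

try:
    LoanApplication(**bad_payload)
except ValidationError as exc:
    print(exc.error_count(), "validation error(s)")  # → 2 validation error(s)
```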
2) Create agents with narrow responsibilities
Do not build one giant agent that “does everything.” Split policy, risk, and compliance into separate agents so each output is easier to test and audit.
```python
from crewai import Agent

policy_agent = Agent(
    role="Loan Policy Checker",
    goal="Apply deterministic lending policy to determine eligibility.",
    backstory="You evaluate loan applications against hard underwriting rules.",
    verbose=True,
)

risk_agent = Agent(
    role="Credit Risk Analyst",
    goal="Assess repayment risk using provided financial data.",
    backstory="You produce structured risk reasoning for consumer loans.",
    verbose=True,
)

compliance_agent = Agent(
    role="Compliance Reviewer",
    goal="Check the application for regulatory and fairness issues.",
    backstory="You focus on KYC, adverse action readiness, and prohibited data use.",
    verbose=True,
)
```
3) Define tasks with explicit outputs
Use tasks that force structured reasoning. In fintech, vague prose is a liability.
```python
from crewai import Task

policy_task = Task(
    description=(
        "Review the loan application for hard policy rules.\n"
        "Return only JSON with keys: eligible (bool), reasons (list of strings), "
        "max_loan_amount (number), required_manual_review (bool)."
    ),
    expected_output="A JSON object describing policy eligibility.",
    agent=policy_agent,
)

risk_task = Task(
    description=(
        "Analyze repayment risk using income, debt load, credit score,"
        " employment status, and adverse events.\n"
        "Return only JSON with keys: risk_band (low/medium/high), "
        "recommended_apr_range (string), reasons (list of strings)."
    ),
    expected_output="A JSON object describing credit risk.",
    agent=risk_agent,
)

compliance_task = Task(
    description=(
        "Check for compliance issues including KYC status,"
        " fair lending concerns, and adverse action readiness.\n"
        "Return only JSON with keys: compliant (bool), issues (list of strings), "
        "requires_human_review (bool)."
    ),
    expected_output="A JSON object describing compliance findings.",
    agent=compliance_agent,
)
```
4) Run the crew and combine results into a decision packet
This is the actual orchestration pattern. You can wrap it in an API endpoint or a queue worker.
```python
from crewai import Crew, Process

def decide_loan(application: LoanApplication):
    crew = Crew(
        agents=[policy_agent, risk_agent, compliance_agent],
        tasks=[policy_task, risk_task, compliance_task],
        process=Process.sequential,
        verbose=True,
    )
    result = crew.kickoff(inputs={"application": application.model_dump()})
    # CrewAI returns a result object or string depending on version/config.
    # Keep parsing strict at your boundary.
    return {
        "applicant_id": application.applicant_id,
        "raw_result": str(result),
        "decision": "manual_review",
        "reasons": [
            "Parse task outputs into structured JSON before making final decisions."
        ],
    }

app = LoanApplication(
    applicant_id="app_10001",
    annual_income=95000,
    monthly_debt=1200,
    requested_amount=15000,
    credit_score=720,
    employment_status="full_time",
    kyc_passed=True,
    country="US",
    purpose="home_improvement",
    existing_customer=True,
    adverse_events_last_12m=0,
)

print(decide_loan(app))
```
In production, you would parse each task’s output into JSON and apply a deterministic decision function:
- approve if policy is eligible
- reject if non-compliant or KYC failed
- manual review if confidence is low or rules conflict
That keeps the LLM as a reasoning layer instead of the source of truth.
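That deterministic layer can be as small as one pure function over the three parsed task outputs. A minimal sketch, assuming field names follow the JSON keys requested in the task descriptions above:

```python
def final_decision(policy: dict, risk: dict, compliance: dict) -> str:
    """Deterministic decision over parsed task outputs.
    The LLM analyzes; this code decides."""
    if not compliance["compliant"]:
        return "reject"
    if not policy["eligible"]:
        return "reject"
    if (policy.get("required_manual_review")
            or compliance.get("requires_human_review")
            or risk["risk_band"] == "high"):
        return "manual_review"
    return "approve"

print(final_decision(
    {"eligible": True, "required_manual_review": False},
    {"risk_band": "low"},
    {"compliant": True, "requires_human_review": False},
))  # → approve
```

Because this function is pure and versioned in code, the same inputs always produce the same decision, which is exactly what an auditor wants to see.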
Production Considerations
- Compliance controls
  - Never let the model use protected attributes like race or religion.
  - Log adverse action reasons in a format legal/compliance teams can review.
- Auditability
  - Store raw inputs, task outputs, final decision logic, model version, prompt version, and timestamp.
  - You need replayability for disputes and regulator requests.
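A minimal audit record can be a single JSON line appended to durable storage. The field set below is illustrative, not a prescribed schema:

```python
import json
import hashlib
from datetime import datetime, timezone

def audit_record(application: dict, task_outputs: dict,
                 decision: str, model_version: str,
                 prompt_version: str) -> str:
    """Build one replayable audit entry as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash of the exact input lets you prove later what was decided on.
        "input_hash": hashlib.sha256(
            json.dumps(application, sort_keys=True).encode()
        ).hexdigest(),
        "application": application,
        "task_outputs": task_outputs,
        "decision": decision,
        "model_version": model_version,
        "prompt_version": prompt_version,
    }
    return json.dumps(record)

line = audit_record({"applicant_id": "app_10001"}, {"policy": {}},
                    "approve", "model-v1", "prompt-v3")
```

Append-only storage (or WORM-style object storage) is the natural home for these lines.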
- Data residency
  - Keep PII in-region if your banking footprint requires it.
  - If you call external LLM APIs, verify where data is processed and whether retention is disabled.
- Monitoring
  - Track approval rate drift by segment, manual review rate, false positives on compliance checks, and latency.
  - Alert when rule violations or malformed outputs spike.
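One way to catch approval-rate drift by segment, sketched against a fixed per-segment baseline (the tolerance and decision shape are illustrative assumptions):

```python
from collections import defaultdict

def approval_rate_drift(decisions: list[dict], baseline: dict[str, float],
                        tolerance: float = 0.10) -> dict[str, float]:
    """Return segments whose approval rate moved more than `tolerance`
    away from its baseline. Each decision looks like
    {"segment": str, "approved": bool}."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [approved, total]
    for d in decisions:
        counts[d["segment"]][1] += 1
        counts[d["segment"]][0] += d["approved"]
    drifted = {}
    for seg, (approved, total) in counts.items():
        rate = approved / total
        if seg in baseline and abs(rate - baseline[seg]) > tolerance:
            drifted[seg] = rate
    return drifted
```

Run it on a rolling window of recent decisions and alert on any non-empty result.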
Common Pitfalls
- Using one agent for everything
  - This creates opaque decisions and brittle prompts.
  - Split underwriting into policy, risk, and compliance tasks.
- Letting the model make final approval decisions directly
  - That breaks determinism and makes audits painful.
  - Use CrewAI for analysis; use code for final decisioning.
- Ignoring structured output validation
  - Free-form text will break downstream systems.
  - Enforce JSON schemas at the boundary and reject malformed responses before they hit production logic.
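A sketch of that boundary check, assuming agent output may arrive wrapped in markdown fences; the key set follows the policy task defined earlier:

```python
import json
import re

REQUIRED_KEYS = {"eligible", "reasons", "max_loan_amount",
                 "required_manual_review"}

def parse_policy_output(raw: str) -> dict:
    """Strip optional markdown fences, parse JSON, and reject anything
    that does not match the expected policy schema."""
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    data = json.loads(text)  # raises ValueError on malformed JSON
    if not REQUIRED_KEYS <= data.keys():
        raise ValueError(f"missing keys: {REQUIRED_KEYS - data.keys()}")
    if not isinstance(data["eligible"], bool):
        raise ValueError("eligible must be a boolean")
    return data

print(parse_policy_output(
    '```json\n{"eligible": true, "reasons": [], '
    '"max_loan_amount": 20000, "required_manual_review": false}\n```'
)["eligible"])  # → True
```

Anything that fails this parse should route to manual review, never silently default to an approval path.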
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.