How to Build a Loan Approval Agent for Insurance Using CrewAI in Python
A loan approval agent for insurance automates the first pass on policyholder financing requests: it gathers applicant data, checks underwriting and eligibility rules, scores risk, and produces a decision packet for a human reviewer. For insurance teams, this matters because loan decisions often sit next to regulated customer data, require auditability, and need consistent treatment across branches, products, and jurisdictions.
Architecture
- Input intake layer
  - Accepts structured application data from CRM, policy admin systems, or an API.
  - Normalizes fields like income, policy tenure, claims history, and jurisdiction.
- Policy retrieval tool
  - Pulls underwriting rules, lending thresholds, and product-specific constraints.
  - Keeps the agent aligned with current insurance policy documents.
- Risk analysis agent
  - Evaluates repayment risk using the applicant profile and insurance context.
  - Flags missing data, contradictory fields, or high-risk indicators.
- Compliance review agent
  - Checks adverse action requirements, fair lending concerns, consent handling, and jurisdiction-specific restrictions.
  - Produces an auditable rationale for every recommendation.
- Decision orchestrator
  - Coordinates the agents in sequence and returns approve / reject / manual review.
  - Ensures deterministic outputs for downstream systems.
- Audit logging sink
  - Stores prompts, tool outputs, final recommendation, timestamps, and model version.
  - Required for internal review and regulator-facing traceability.
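The intake layer's normalization step can run deterministically, before any agent is involved. A minimal sketch, assuming a CRM-style raw payload; the field names and the `normalize_application` helper are illustrative, not part of CrewAI:

```python
from typing import Any

# Hypothetical raw payload as it might arrive from a CRM or policy admin system.
raw_payload = {
    "applicantId": " INS-10492 ",
    "annualIncome": "78,000",
    "jurisdiction": "us-ny",
}

def normalize_application(payload: dict[str, Any]) -> dict[str, Any]:
    """Map source-system field names and formats onto the agent's input schema."""
    return {
        "applicant_id": str(payload["applicantId"]).strip(),
        "annual_income": float(str(payload["annualIncome"]).replace(",", "")),
        "jurisdiction": str(payload["jurisdiction"]).upper(),
    }

print(normalize_application(raw_payload))
# {'applicant_id': 'INS-10492', 'annual_income': 78000.0, 'jurisdiction': 'US-NY'}
```

Doing this in plain code rather than a prompt keeps the downstream agents working from one canonical shape.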
Implementation
1) Install CrewAI and define your inputs
Install the library (pip install crewai), then start with a small but explicit schema. Insurance workflows break when you let the model infer too much from free text.
```python
from crewai import Agent, Task, Crew, Process
from crewai.tools import BaseTool
from pydantic import BaseModel
from typing import Optional


class LoanApplication(BaseModel):
    applicant_id: str
    annual_income: float
    requested_amount: float
    policy_tenure_years: int
    claims_count_24m: int
    jurisdiction: str
    consent_to_process: bool
    employment_status: Optional[str] = None


application = LoanApplication(
    applicant_id="INS-10492",
    annual_income=78000,
    requested_amount=12000,
    policy_tenure_years=6,
    claims_count_24m=1,
    jurisdiction="US-NY",
    consent_to_process=True,
    employment_status="full_time",
)
```
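One payoff of the explicit schema: malformed input fails fast, before it reaches any prompt. A quick illustration (the model is re-declared here so the snippet runs standalone; the `invalid_fields` helper is just for demonstration):

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class LoanApplication(BaseModel):
    applicant_id: str
    annual_income: float
    requested_amount: float
    policy_tenure_years: int
    claims_count_24m: int
    jurisdiction: str
    consent_to_process: bool
    employment_status: Optional[str] = None

def invalid_fields(payload: dict) -> set:
    """Return the names of fields that fail validation."""
    try:
        LoanApplication(**payload)
        return set()
    except ValidationError as exc:
        return {str(err["loc"][0]) for err in exc.errors()}

bad = {
    "applicant_id": "INS-10492",
    "annual_income": "not a number",  # wrong type
    "requested_amount": 12000,
    "policy_tenure_years": 6,
    "claims_count_24m": 1,
    "jurisdiction": "US-NY",
    # consent_to_process missing entirely
}
print(sorted(invalid_fields(bad)))  # ['annual_income', 'consent_to_process']
```

Rejecting these cases at the schema boundary is cheaper and more auditable than letting an agent guess at missing consent or a garbled income figure.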
2) Create agents with narrow responsibilities
Do not build one giant agent. Split underwriting judgment from compliance review so each step is easier to test and audit.
```python
risk_agent = Agent(
    role="Loan Risk Analyst",
    goal="Assess repayment risk using insurance-linked applicant data",
    backstory="You evaluate loan applications for an insurance lender.",
    verbose=True,
)

compliance_agent = Agent(
    role="Insurance Compliance Reviewer",
    goal="Check regulatory and policy compliance before any decision is issued",
    backstory="You verify consent, jurisdiction rules, and adverse action readiness.",
    verbose=True,
)
```
3) Add tools for policy lookup and rule checks
Use tools for facts that must not be hallucinated. In production this usually means a document store or rules engine behind the tool.
```python
class PolicyLookupTool(BaseTool):
    name: str = "policy_lookup"
    description: str = "Return current lending thresholds for insurance products."

    def _run(self, jurisdiction: str) -> str:
        # In production, back this with a document store or rules engine.
        policies = {
            "US-NY": "Max amount=15000; claims in last 24m must be <=2; manual review if income/requested_amount < 5.",
            "US-TX": "Max amount=20000; consent required; manual review if tenure < 2 years.",
        }
        return policies.get(jurisdiction, "No policy found")


policy_tool = PolicyLookupTool()
risk_agent.tools = [policy_tool]
compliance_agent.tools = [policy_tool]
```
4) Define tasks and execute the crew
The pattern below creates a fixed, ordered workflow: risk analysis first, then compliance validation. Use Process.sequential so you can reason about every step in an audit trail.
```python
risk_task = Task(
    description=(
        "Evaluate the loan application for risk. "
        f"Application: {application.model_dump()}. "
        "Return a short recommendation with reasons."
    ),
    expected_output="Risk assessment with approve/review/reject recommendation.",
    agent=risk_agent,
)

compliance_task = Task(
    description=(
        "Review the same application for compliance issues. "
        f"Application: {application.model_dump()}. "
        "Use the policy_lookup tool where needed. "
        "Return whether the case can proceed or needs manual review."
    ),
    expected_output="Compliance decision with rationale.",
    agent=compliance_agent,
)

crew = Crew(
    agents=[risk_agent, compliance_agent],
    tasks=[risk_task, compliance_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)
```
If you want a cleaner production shape, wrap kickoff() inside an API endpoint and persist the raw task outputs to your audit store before returning anything to the caller.
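One possible shape for that wrapper, with a stub standing in for the real crew (the audit directory, model version string, and `run_crew` stub are placeholders you would replace with your crew and storage layer):

```python
import json
import time
import uuid
from pathlib import Path

AUDIT_DIR = Path("audit_logs")  # placeholder for your real audit store

def run_crew(application: dict) -> str:
    """Stub standing in for crew.kickoff() from the previous step."""
    return "manual_review: income-to-amount ratio near threshold"

def handle_loan_request(application: dict) -> dict:
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "application": application,
        "model_version": "example-model-v1",  # record whatever you actually deploy
        "raw_output": run_crew(application),
    }
    # Persist the full record BEFORE returning anything to the caller.
    AUDIT_DIR.mkdir(exist_ok=True)
    (AUDIT_DIR / f"{record['request_id']}.json").write_text(json.dumps(record))
    return {"request_id": record["request_id"], "decision": record["raw_output"]}

response = handle_loan_request({"applicant_id": "INS-10492"})
print(response["decision"])
```

Writing the audit record before responding means a crash mid-request never produces a decision with no trace.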
Production Considerations
- Auditability
  - Store every prompt, tool response, task output, model version, and final decision.
  - Keep immutable logs so compliance teams can reconstruct why a loan was approved or declined.
- Data residency
  - Route EU or country-specific applications to region-bound infrastructure.
  - Do not send PII or claims history to models outside approved jurisdictions.
- Guardrails
  - Block decisions when consent is missing or when required fields are absent.
  - Add hard rules outside the LLM for max exposure limits, prohibited geographies, and adverse action triggers.
- Monitoring
  - Track approval-rate drift by product line, branch, jurisdiction, and (where legally allowed) age-band proxy.
  - Alert on spikes in manual reviews or repeated tool failures; these usually indicate bad upstream data or policy changes.
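The guardrail point above can be sketched as a deterministic pre-check that runs before any agent sees the application (the limits, field names, and jurisdiction codes here are illustrative):

```python
MAX_EXPOSURE = 20_000                        # illustrative hard limit
PROHIBITED_JURISDICTIONS = {"XX-EMBARGOED"}  # illustrative

def hard_guardrails(app: dict) -> list:
    """Deterministic checks outside the LLM. Any returned reason blocks the workflow."""
    reasons = []
    if not app.get("consent_to_process"):
        reasons.append("missing consent")
    if app.get("requested_amount", 0) > MAX_EXPOSURE:
        reasons.append("exceeds max exposure limit")
    if app.get("jurisdiction") in PROHIBITED_JURISDICTIONS:
        reasons.append("prohibited jurisdiction")
    return reasons

blocked = hard_guardrails({"requested_amount": 50_000, "jurisdiction": "US-NY"})
print(blocked)  # ['missing consent', 'exceeds max exposure limit']
```

If the list is non-empty, route straight to rejection or manual review without ever calling kickoff().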
Common Pitfalls
- Using one agent for everything
  - This makes debugging painful and increases hallucination risk.
  - Split risk scoring and compliance into separate agents with explicit task boundaries.
- Letting the model decide hard regulatory rules
  - The model should explain decisions; it should not invent eligibility criteria.
  - Put non-negotiable rules in code or a rules engine behind tools.
- Skipping consent and residency checks
  - Insurance data often includes sensitive personal and claims information.
  - Reject requests early if consent is missing or if processing would violate local residency requirements.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.