How to Build a Loan Approval Agent Using CrewAI in Python for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, crewai, python, pension-funds

A loan approval agent for pension funds takes a borrower application, gathers the missing facts, checks policy and risk rules, and produces a decision package that a human credit officer can approve or reject. It matters because pension funds are not just optimizing yield; they are managing retirement capital under strict compliance, audit, and fiduciary constraints.

Architecture

  • Intake agent

    • Normalizes application data from CRM, PDF forms, or API payloads.
    • Extracts borrower identity, requested amount, tenor, collateral, and purpose.
  • Policy compliance agent

    • Checks the application against pension fund lending policy.
    • Flags prohibited sectors, concentration limits, tenor caps, and jurisdiction issues.
  • Risk assessment agent

    • Scores affordability, repayment capacity, historical performance, and collateral coverage.
    • Produces a structured risk note with reasons.
  • Documentation agent

    • Verifies required documents are present.
    • Identifies missing KYC, financial statements, board approvals, or legal opinions.
  • Decision synthesis agent

    • Combines outputs into an approval memo.
    • Recommends approve, reject, or escalate to human review.
  • Audit logger

    • Stores inputs, outputs, tool calls, and final recommendation.
    • Supports regulator review and internal model governance.

Implementation

1) Install CrewAI and define the application context

Start with a clean Python environment and keep the application state explicit. For pension funds, the input schema should be strict because auditability matters more than convenience.

pip install crewai crewai-tools pydantic
from pydantic import BaseModel
from typing import List, Optional

class LoanApplication(BaseModel):
    applicant_name: str
    borrower_type: str
    requested_amount: float
    currency: str
    tenor_months: int
    sector: str
    jurisdiction: str
    collateral_value: float
    annual_revenue: float
    existing_debt: float
    documents: List[str]
    notes: Optional[str] = None
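Strictness pays off at intake. With pydantic v2 you can also reject unknown fields outright via `extra="forbid"`; the two-field `StrictApp` below is an illustrative stand-in, and the real `LoanApplication` above behaves the same way if you add the same config:

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class StrictApp(BaseModel):
    # Illustrative mini-schema; add model_config to LoanApplication for the same effect.
    model_config = ConfigDict(extra="forbid")  # reject unrecognized fields
    applicant_name: str
    requested_amount: float

try:
    # An unexpected field fails validation instead of being silently dropped.
    StrictApp(applicant_name="Acme", requested_amount=1e6, free_text="?")
except ValidationError as e:
    print("rejected at intake:", e.errors()[0]["type"])  # extra_forbidden
```

Failing loudly here is the point: a payload that does not match the schema never reaches the agents, and the rejection itself is a loggable audit event.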

2) Create agents with narrow responsibilities

Use Agent objects with specific roles. Do not build one “smart” agent that does everything; that makes governance harder and failure analysis messy.

from crewai import Agent

policy_agent = Agent(
    role="Pension Fund Credit Policy Analyst",
    goal="Check loan applications against pension fund lending policy and compliance rules",
    backstory=(
        "You review applications for pension fund lending portfolios. "
        "You focus on regulatory compliance, concentration limits, sector restrictions, "
        "and documentation completeness."
    ),
    verbose=True,
)

risk_agent = Agent(
    role="Credit Risk Analyst",
    goal="Assess repayment capacity and collateral adequacy for each loan application",
    backstory=(
        "You produce conservative credit risk assessments for institutional lenders. "
        "You identify leverage concerns, weak coverage ratios, and concentration risks."
    ),
    verbose=True,
)

memo_agent = Agent(
    role="Approval Memo Writer",
    goal="Synthesize findings into a concise decision memo for human review",
    backstory=(
        "You write structured credit memos for investment committees. "
        "You keep recommendations grounded in evidence and avoid unsupported claims."
    ),
    verbose=True,
)

3) Define tasks that produce auditable outputs

Each task should return something a reviewer can inspect. For pension funds, that means clear reasons, not just a binary decision.

from crewai import Task

policy_task = Task(
    description=(
        "Review this loan application for pension fund policy compliance. "
        "Check sector restrictions, jurisdiction risk, document completeness, "
        "and any obvious policy violations. Return bullet points with pass/fail status."
    ),
    expected_output="A compliance assessment with explicit policy flags and missing items.",
    agent=policy_agent,
)

risk_task = Task(
    description=(
        "Assess credit risk for this loan application. "
        "Estimate leverage using existing debt relative to annual revenue and "
        "compare collateral value to requested amount. Return a conservative risk summary."
    ),
    expected_output="A risk assessment with key ratios and an overall risk rating.",
    agent=risk_agent,
)

memo_task = Task(
    description=(
        "Combine the compliance and risk findings into a final approval memo. "
        "Recommend approve, reject, or escalate. Include rationale suitable for audit review."
    ),
    expected_output="A final decision memo with recommendation and reasons.",
    agent=memo_agent,
)
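The ratios the risk task reasons about are cheap to compute deterministically, and pre-computing them keeps the model from doing arithmetic it might get wrong. A minimal sketch (the helper name and rounding are choices, not CrewAI API; you would pass the result into the task context):

```python
def compute_risk_ratios(app: dict) -> dict:
    """Pre-compute the key ratios the risk task cites in its assessment."""
    leverage = app["existing_debt"] / app["annual_revenue"]
    coverage = app["collateral_value"] / app["requested_amount"]
    return {
        "leverage": round(leverage, 2),             # existing debt / annual revenue
        "collateral_coverage": round(coverage, 2),  # collateral / requested amount
    }

ratios = compute_risk_ratios({
    "existing_debt": 3_500_000,
    "annual_revenue": 12_000_000,
    "collateral_value": 4_000_000,
    "requested_amount": 2_500_000,
})
print(ratios)  # {'leverage': 0.29, 'collateral_coverage': 1.6}
```

The agent then interprets the numbers rather than producing them, which is both more reliable and easier to defend in review.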

4) Assemble the crew and run the workflow

This is the actual pattern you want in production: typed input → specialized agents → structured memo → human review gate.

from crewai import Crew, Process

def build_loan_approval_crew():
    return Crew(
        agents=[policy_agent, risk_agent, memo_agent],
        tasks=[policy_task, risk_task, memo_task],
        verbose=True,
        process=Process.sequential,
    )

if __name__ == "__main__":
    app = LoanApplication(
        applicant_name="Acme Logistics Ltd",
        borrower_type="Corporate",
        requested_amount=2500000,
        currency="USD",
        tenor_months=36,
        sector="Transport",
        jurisdiction="Kenya",
        collateral_value=4000000,
        annual_revenue=12000000,
        existing_debt=3500000,
        documents=["KYC", "Financial Statements", "Collateral Valuation"],
        notes="Expansion financing"
    )

    crew = build_loan_approval_crew()
    result = crew.kickoff(inputs={"application": app.model_dump()})
    print(result)

The important part is not the print statement; it is the control boundary. In production you would persist inputs, results, timestamps, model version information where applicable, and tool traces to an audit store before any human makes the final call.
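A minimal sketch of that boundary, assuming a local JSONL file as the audit store (swap in whatever append-only storage your fund has approved; the function and field names are suggestions):

```python
import json
import time
import uuid
from pathlib import Path

def persist_audit_record(inputs: dict, result: str,
                         path: str = "audit_log.jsonl") -> str:
    """Append one decision artifact to the audit log; returns the record id."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "inputs": inputs,
        "result": result,
        "human_decision": None,  # filled in later by the reviewing credit officer
    }
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]
```

Call this before the memo is ever surfaced to a reviewer, so the log captures what the system recommended independently of what the human ultimately decided.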

Production Considerations

  • Keep data residency explicit

    • Pension fund loan data often cannot leave approved jurisdictions.
    • Pin your LLM endpoints and vector stores to compliant regions only.
  • Add hard guardrails before the crew runs

    • Enforce sector exclusions, maximum exposure per borrower group, minimum collateral ratios in code.
    • Do not rely on prompts alone for policy enforcement.
  • Log every decision artifact

    • Store raw input payloads, task outputs, timestamps from Crew.kickoff(), and reviewer overrides.
    • Regulators will ask why a loan was approved; your logs need to answer that without reconstruction work.
  • Route uncertain cases to humans

    • Any missing KYC document, weak collateral coverage, or jurisdiction mismatch should trigger escalation.
    • Pension funds should use the agent as a decision support layer, not an autonomous lender.
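The hard guardrails above can live in a plain function that runs before kickoff. The thresholds and sector names below are placeholders, not real policy values; substitute your fund's actual lending policy:

```python
# Placeholder policy values; load these from your policy system in practice.
PROHIBITED_SECTORS = {"Gambling", "Tobacco"}
MIN_COLLATERAL_RATIO = 1.25
MAX_TENOR_MONTHS = 120

def pre_crew_guardrails(app: dict) -> list:
    """Deterministic policy checks; a non-empty result blocks the crew run."""
    violations = []
    if app["sector"] in PROHIBITED_SECTORS:
        violations.append(f"prohibited sector: {app['sector']}")
    if app["collateral_value"] < MIN_COLLATERAL_RATIO * app["requested_amount"]:
        violations.append("collateral below minimum coverage ratio")
    if app["tenor_months"] > MAX_TENOR_MONTHS:
        violations.append("tenor exceeds policy cap")
    return violations

# Only call crew.kickoff() when this list is empty; otherwise reject or escalate.
flags = pre_crew_guardrails({"sector": "Transport", "collateral_value": 4_000_000,
                             "requested_amount": 2_500_000, "tenor_months": 36})
```

Because these checks are plain Python, they are trivially testable and auditable, and no prompt drift can weaken them.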

Common Pitfalls

  1. Using one general-purpose agent for everything

    • This creates vague outputs and weak accountability.
    • Split policy review, risk analysis, and memo writing into separate Agent instances.
  2. Letting the LLM decide policy rules from scratch

    • The model will hallucinate thresholds if you do not encode them elsewhere.
    • Put non-negotiable rules in deterministic Python checks before Crew.kickoff().
  3. Skipping audit-ready output formats

    • Free-form prose is hard to defend in committee or regulatory review.
    • Require structured fields like recommendation, reasons, missing documents, exposure notes, and escalation status.
  4. Ignoring pension-fund-specific constraints

    • A good consumer lending flow is not enough here.
    • You need compliance checks for fiduciary duty, concentration limits across related borrowers, jurisdictional restrictions, and local data handling requirements.
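One way to enforce the audit-ready format from pitfall 3 is a response model the memo must parse into. The field names below are a suggestion; recent CrewAI versions let you attach such a model to a task (e.g. via `output_pydantic`), but verify against your installed version:

```python
from typing import List
try:
    from typing import Literal
except ImportError:  # pragma: no cover
    from typing_extensions import Literal

from pydantic import BaseModel

class DecisionMemo(BaseModel):
    """Structured memo fields a committee or regulator can inspect directly."""
    recommendation: Literal["approve", "reject", "escalate"]
    reasons: List[str]
    missing_documents: List[str]
    exposure_notes: str
    escalated: bool

# Illustrative instance; in practice this is parsed from the memo task output.
memo = DecisionMemo(
    recommendation="escalate",
    reasons=["Collateral coverage adequate", "KYC documentation incomplete"],
    missing_documents=["Board Approval"],
    exposure_notes="Within transport-sector concentration limit.",
    escalated=True,
)
```

If the model's output cannot be parsed into this schema, treat that itself as an escalation trigger rather than accepting free-form prose.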


By Cyprian Aarons, AI Consultant at Topiax.
