How to Build a Loan Approval Agent Using CrewAI in Python for Payments

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, crewai, python, payments

A loan approval agent for payments takes applicant data, checks policy and risk rules, gathers supporting evidence, and returns a decision with a traceable rationale. In payments, this matters because approval is not just about speed; it directly affects fraud exposure, compliance, chargeback risk, and how much manual review your ops team has to absorb.

Architecture

  • Input normalizer

    • Converts application payloads from your payment stack into a stable internal schema.
    • Validates required fields like amount, merchant category, country, repayment source, and KYC status.
  • Policy evaluator

    • Applies hard rules before any LLM reasoning.
    • Blocks disallowed cases like missing consent, unsupported jurisdictions, or failed identity checks.
  • Risk analyst agent

    • Uses CrewAI to assess the application against underwriting signals.
    • Produces a structured recommendation: approve, decline, or manual review.
  • Compliance auditor agent

    • Checks the decision trail for PCI/PII handling, consent flags, and explainability.
    • Ensures the final output can be stored in an audit log.
  • Decision orchestrator

    • Runs the agents in sequence using Crew, Agent, Task, and Process.
    • Enforces deterministic routing around high-risk payment cases.
  • Audit sink

    • Persists inputs, outputs, model version, timestamps, and final decision.
    • Supports regulatory review and internal dispute resolution.
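The audit sink can start as an append-only record per decision. A minimal sketch (the field names are illustrative, not a fixed schema; storage is left to you):

```python
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    applicant_id: str
    input_hash: str      # hash of the raw payload, so the log holds no PII
    decision: str
    source: str          # "policy_gate" or "crewai"
    model_version: str
    timestamp: str

def make_audit_record(applicant_id: str, payload: dict, decision: str,
                      source: str, model_version: str) -> AuditRecord:
    # Canonical JSON so the same payload always hashes identically.
    raw = json.dumps(payload, sort_keys=True).encode()
    return AuditRecord(
        applicant_id=applicant_id,
        input_hash=hashlib.sha256(raw).hexdigest(),
        decision=decision,
        source=source,
        model_version=model_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

Persisting `asdict(record)` as one JSON line per decision is enough to answer "why was this applicant declined" later.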

Implementation

1) Define the application schema and policy gate

Start by separating hard rules from agent reasoning. In payments, anything involving missing consent or blocked geographies should fail before the LLM gets involved.

from dataclasses import dataclass
from typing import Literal

Decision = Literal["approve", "decline", "manual_review"]

@dataclass
class LoanApplication:
    applicant_id: str
    amount: float
    currency: str
    country: str
    kyc_passed: bool
    consent_to_process_data: bool
    repayment_source_verified: bool
    merchant_category: str
    monthly_income: float
    existing_debt: float

def policy_gate(app: LoanApplication) -> Decision | None:
    blocked_countries = {"KP", "IR", "SY"}
    if not app.consent_to_process_data:
        return "decline"
    if not app.kyc_passed:
        return "manual_review"
    if app.country in blocked_countries:
        return "decline"
    if not app.repayment_source_verified:
        return "manual_review"
    return None

2) Build the CrewAI agents

Use one agent for risk analysis and one for compliance review. Keep their jobs narrow; that makes outputs easier to validate and easier to audit later.

from crewai import Agent

risk_analyst = Agent(
    role="Loan Risk Analyst",
    goal="Assess loan applications for payment-linked credit risk",
    backstory=(
        "You evaluate loan applications using payment behavior signals, "
        "income-to-debt ratio, KYC status, and repayment source quality."
    ),
    verbose=True,
)

compliance_auditor = Agent(
    role="Compliance Auditor",
    goal="Verify that the decision is compliant and auditable",
    backstory=(
        "You check that decisions respect consent requirements, PII handling, "
        "and regulatory constraints relevant to payments."
    ),
    verbose=True,
)

3) Define tasks with explicit outputs

CrewAI works best when tasks ask for structured output. For production systems, tell the model exactly what fields you need so you can validate them downstream.

from crewai import Task

risk_task = Task(
    description=(
        "Review this loan application and produce a recommendation.\n"
        "Application data:\n{application}\n\n"
        "Return:\n"
        "- decision: approve|decline|manual_review\n"
        "- reasons: bullet list\n"
        "- risk_score: integer 0-100\n"
        "- notes_for_audit: short paragraph"
    ),
    expected_output="A structured underwriting recommendation.",
    agent=risk_analyst,
)

compliance_task = Task(
    description=(
        "Review the underwriting recommendation for compliance issues.\n"
        "Focus on consent, data minimization, auditability, and payment-regulatory concerns.\n"
        "Return whether the recommendation is safe to release."
    ),
    expected_output="A compliance verdict with remediation notes if needed.",
    agent=compliance_auditor,
)

4) Orchestrate the crew and expose a single decision function

This is the actual pattern you want in a service layer. The policy gate handles hard blocks; CrewAI handles reasoned assessment; your code owns the final decision.

import json
from crewai import Crew, Process

def evaluate_application(app: LoanApplication) -> dict:
    gated = policy_gate(app)
    if gated is not None:
        return {
            "applicant_id": app.applicant_id,
            "decision": gated,
            "source": "policy_gate",
            "reason": "Hard rule triggered before agent execution",
        }

    crew = Crew(
        agents=[risk_analyst, compliance_auditor],
        tasks=[risk_task, compliance_task],
        process=Process.sequential,
        verbose=True,
    )

    result = crew.kickoff(inputs={
        "application": json.dumps(app.__dict__, indent=2)
    })

    # Conservative default: hold for manual review until the agent output
    # has been parsed and validated by your own code.
    return {
        "applicant_id": app.applicant_id,
        "decision": "manual_review",
        "source": "crewai",
        "result": str(result),
    }

In practice, you’ll usually map the agent output into your own decision object after validating it. Do not let raw model text become your production decision without parsing and checking it first.
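A minimal validator for that pattern might look like this. The field names match the risk task prompt above; the parsing assumes you ask the model to return JSON, and anything malformed falls back to manual review:

```python
import json

ALLOWED_DECISIONS = {"approve", "decline", "manual_review"}

def parse_risk_output(raw: str) -> dict:
    """Validate agent output before it can influence a production decision.

    Any parsing or validation failure degrades to manual_review rather
    than letting unchecked model text drive an approval.
    """
    fallback = {"decision": "manual_review",
                "reasons": ["unparseable or invalid agent output"]}
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return fallback
    decision = data.get("decision")
    risk_score = data.get("risk_score")
    if decision not in ALLOWED_DECISIONS:
        return fallback
    if not isinstance(risk_score, int) or not 0 <= risk_score <= 100:
        return fallback
    return {
        "decision": decision,
        "risk_score": risk_score,
        "reasons": data.get("reasons", []),
        "notes_for_audit": data.get("notes_for_audit", ""),
    }
```

Note that the fallback is deliberately the safe path: a parsing bug costs you a manual review, never an unchecked approval.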

Production Considerations

  • Keep hard controls outside the model

    • Consent checks, jurisdiction blocks, KYC failures, and sanctions hits should be deterministic code.
    • The agent should recommend; your policy engine should decide when rules are absolute.
  • Log everything needed for audit

    • Store input payload hash, model/agent version, prompt text, task output, final decision, timestamp, and reviewer overrides.
    • For payments workflows this is non-negotiable when regulators ask why a customer was declined.
  • Control data residency

    • If applicant data includes PII or bank details, route processing to approved regions only.
    • Mask or tokenize sensitive fields before passing them into CrewAI tasks whenever possible.
  • Add human review on edge cases

    • High amounts, thin-file applicants, mismatched identity signals, or borderline affordability should go to manual review.
    • Use the agent to prepare a case summary for ops instead of auto-approving risky files.
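Masking before orchestration can be a plain dictionary transform. A sketch (the sensitive field list is illustrative; your own schema decides what counts as sensitive):

```python
# Illustrative field names -- align with your actual application schema.
SENSITIVE_FIELDS = {"account_number", "card_number", "national_id"}

def mask_payload(payload: dict) -> dict:
    """Return a copy with sensitive values reduced to a last-4 hint."""
    masked = {}
    for key, value in payload.items():
        if key in SENSITIVE_FIELDS and isinstance(value, str) and len(value) > 4:
            masked[key] = "****" + value[-4:]
        elif key in SENSITIVE_FIELDS:
            masked[key] = "****"
        else:
            masked[key] = value
    return masked
```

Run the payload through this before it is serialized into any task description, so agents reason over risk signals rather than raw identifiers.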

Common Pitfalls

  • Letting the LLM make policy decisions

    • Mistake: asking the agent to decide on sanctions or consent violations.
    • Fix: implement those as pre-checks in Python before calling Crew.kickoff().
  • Passing raw PII into every task

    • Mistake: sending full account numbers, card data, or unnecessary identity fields to all agents.
    • Fix: minimize payloads per task and redact sensitive values before orchestration.
  • Treating free-text output as machine-safe

    • Mistake: reading result directly and turning it into an approval.
    • Fix: require structured fields in task prompts and validate them with your own parser before releasing any decision.
  • Ignoring regional compliance constraints

    • Mistake: deploying one global model path for all applicants.
    • Fix: enforce region-aware routing so data stays within approved jurisdictions and local retention rules are respected.
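Region-aware routing is another control that belongs in deterministic code. A sketch, assuming hypothetical per-region inference endpoints (the URLs and mappings below are placeholders, not real services):

```python
# Hypothetical endpoints -- replace with your approved regional deployments.
REGION_ENDPOINTS = {
    "EU": "https://eu.inference.example.com",
    "US": "https://us.inference.example.com",
}
COUNTRY_TO_REGION = {"DE": "EU", "FR": "EU", "US": "US", "CA": "US"}

def route_for_country(country: str) -> str:
    """Pick an approved processing region, or refuse to process at all."""
    region = COUNTRY_TO_REGION.get(country)
    if region is None:
        raise ValueError(f"No approved processing region for country {country!r}")
    return REGION_ENDPOINTS[region]
```

Failing closed here matters: an applicant from an unmapped country should be rejected or queued, never silently routed to a default region.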


By Cyprian Aarons, AI Consultant at Topiax.
