How to Build a Claims Processing Agent Using CrewAI in Python for Lending
A claims processing agent for lending takes incoming borrower claims, extracts the relevant facts, checks them against policy and loan terms, and routes the case for approval, denial, or manual review. It matters because claims in lending are high-risk operations: if you miss a compliance rule, mishandle borrower data, or fail to log the decision path, you create legal exposure and operational drag.
Architecture
For a lending claims agent, keep the system small and auditable (a sketch of the decision contract follows this list):
- Claim intake layer: receives claim text, uploaded documents, and metadata such as loan ID, jurisdiction, and product type.
- Policy retrieval layer: pulls the relevant lending policy, servicing rules, and regulatory constraints before any decision is made.
- Claims analysis agent: extracts entities, checks eligibility, and classifies the claim against policy.
- Compliance review agent: verifies that the proposed outcome does not violate lending rules, disclosure requirements, or internal controls.
- Decision orchestrator: combines outputs from agents and returns one of approve, deny, or escalate.
- Audit logging layer: stores inputs, outputs, tool calls, timestamps, model version, and final rationale for review.
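Before wiring up CrewAI, it helps to pin down the contract the orchestrator hands downstream. Here is a minimal sketch of that contract; the field names are illustrative choices for this article, not anything CrewAI mandates:

from typing import Literal
from pydantic import BaseModel, Field

class ClaimDecision(BaseModel):
    claim_id: str
    decision: Literal["approve", "deny", "escalate"]       # no other values are valid
    rationale: str                                         # human-readable justification for auditors
    policy_refs: list[str] = Field(default_factory=list)   # IDs of policy rules the decision relied on

Keeping the allowed decisions in a Literal means any out-of-range value fails validation instead of leaking into servicing systems.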
Implementation
1) Define tools for policy lookup and audit logging
CrewAI works best when you keep tools narrow. For lending claims, one tool should retrieve policy context and another should persist an audit trail.
from crewai import Agent, Task, Crew
from crewai.tools import BaseTool
from pydantic import BaseModel
from typing import Type
import json
from datetime import datetime, timezone


class PolicyLookupInput(BaseModel):
    loan_type: str
    jurisdiction: str


class PolicyLookupTool(BaseTool):
    name: str = "policy_lookup"
    description: str = "Fetch lending policy rules for a given loan type and jurisdiction."
    args_schema: Type[BaseModel] = PolicyLookupInput

    def _run(self, loan_type: str, jurisdiction: str) -> str:
        # Replace with a database or vector-store lookup in production
        return json.dumps({
            "loan_type": loan_type,
            "jurisdiction": jurisdiction,
            "required_documents": ["claim_form", "loan_statement", "supporting_evidence"],
            "decision_rules": [
                "Claims older than 90 days require manual review",
                "Missing required documents => escalate",
                "Borrower identity mismatch => deny pending investigation"
            ]
        })


class AuditLogInput(BaseModel):
    claim_id: str
    decision: str
    rationale: str


class AuditLogTool(BaseTool):
    name: str = "audit_log"
    description: str = "Write claim decisions to an immutable audit store."
    args_schema: Type[BaseModel] = AuditLogInput

    def _run(self, claim_id: str, decision: str, rationale: str) -> str:
        record = {
            "claim_id": claim_id,
            "decision": decision,
            "rationale": rationale,
            # Timezone-aware timestamp; datetime.utcnow() is deprecated
            "timestamp": datetime.now(timezone.utc).isoformat()
        }
        # Replace with append-only storage like S3 Object Lock / WORM DB / SIEM
        print(json.dumps(record))
        return "logged"
2) Create specialized agents
Use one agent for claims analysis and another for compliance. Keep their instructions explicit so they do not improvise around lending rules.
claims_agent = Agent(
    role="Claims Analyst",
    goal="Analyze lending claims against policy and extract a defensible recommendation.",
    backstory="You review borrower claims for servicing teams and produce structured recommendations.",
    tools=[PolicyLookupTool()],
    verbose=True,
)

compliance_agent = Agent(
    role="Compliance Reviewer",
    goal="Validate that claim decisions follow lending policy and regulatory constraints.",
    backstory="You ensure outcomes are compliant with internal controls and lending regulations.",
    verbose=True,
)
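The audit trail should record an exact model version, which is easier when each agent is pinned to one. CrewAI's Agent accepts an llm argument for this; a minimal sketch, assuming the built-in crewai LLM wrapper and an illustrative model name:

from crewai import LLM

# Pin the model and temperature so the audit log's model-version field is
# reproducible. The model name here is an example, not a recommendation.
pinned_llm = LLM(model="gpt-4o", temperature=0)

# Pass llm=pinned_llm when constructing each agent, e.g.:
compliance_agent = Agent(
    role="Compliance Reviewer",
    goal="Validate that claim decisions follow lending policy and regulatory constraints.",
    backstory="You ensure outcomes are compliant with internal controls and lending regulations.",
    llm=pinned_llm,
    verbose=True,
)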
3) Build tasks with clear outputs
The first task should summarize facts from the claim. The second should validate the recommendation against policy. In production, make the output structured JSON so downstream systems can consume it safely; a sketch using output_pydantic follows the task definitions.
claim_analysis_task = Task(
    description=(
        "Review this lending claim:\n"
        "- Claim ID: CLM-10291\n"
        "- Loan type: personal_loan\n"
        "- Jurisdiction: US-NY\n"
        "- Claim text: Borrower says payment was misapplied after autopay failure.\n"
        "- Documents provided: claim_form, loan_statement\n\n"
        "Use the policy_lookup tool to determine required documents and decision rules. "
        "Return a concise recommendation with one of approve/deny/escalate."
    ),
    expected_output="A structured recommendation with decision and rationale.",
    agent=claims_agent,
)

compliance_task = Task(
    description=(
        "Review the proposed claim outcome for CLM-10291. "
        "Check for compliance issues specific to lending operations such as missing evidence, "
        "jurisdictional handling requirements, auditability, and escalation triggers. "
        "Return final decision JSON with fields decision and rationale."
    ),
    expected_output="Final compliance-approved decision in JSON format.",
    agent=compliance_agent,
    context=[claim_analysis_task],  # pass the analyst's recommendation in explicitly
)
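To make the structured-JSON requirement enforceable rather than aspirational, CrewAI tasks accept an output_pydantic model. A sketch that reuses the ClaimDecision contract from the architecture section:

compliance_task = Task(
    description=(
        "Review the proposed claim outcome for CLM-10291 and return the final decision "
        "as JSON with fields claim_id, decision, and rationale."
    ),
    expected_output="Final compliance-approved decision as ClaimDecision JSON.",
    agent=compliance_agent,
    context=[claim_analysis_task],  # analyst output flows in as context
    output_pydantic=ClaimDecision,  # output is parsed and validated against the contract
)

After kickoff, the validated object is exposed on the crew output, which the persistence step below takes advantage of.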
4) Run the crew and persist the result
For a real workflow, run analysis first and then write the result into your audit store. If you want the sequencing to be explicit, set process=Process.sequential (this is CrewAI's default ordering).
crew = Crew(
    agents=[claims_agent, compliance_agent],
    tasks=[claim_analysis_task, compliance_task],
    verbose=True,
)

result = crew.kickoff()

audit_tool = AuditLogTool()
audit_tool._run(
    claim_id="CLM-10291",
    decision="escalate",  # hardcoded for the demo; in production parse it from the structured result
    rationale=str(result)
)
print(result)
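If you attached output_pydantic to the final task as sketched earlier, you can persist the validated fields instead of a stringified blob; the parsed model is available on the crew output:

final = result.pydantic  # ClaimDecision instance, or None if output_pydantic was not set
if final is not None:
    audit_tool._run(
        claim_id=final.claim_id,
        decision=final.decision,
        rationale=final.rationale,
    )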
If you want tighter orchestration across multiple claims or document types later:
from crewai import Process

crew = Crew(
    agents=[claims_agent, compliance_agent],
    tasks=[claim_analysis_task, compliance_task],
    process=Process.sequential,
)
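For batches of claims, CrewAI can interpolate kickoff inputs into task descriptions and run the crew once per claim. A sketch, assuming you rewrite the analysis task with {placeholder} fields (the second claim's details are illustrative):

claim_analysis_task = Task(
    description=(
        "Review lending claim {claim_id} ({loan_type}, {jurisdiction}): {claim_text}\n"
        "Use the policy_lookup tool to determine required documents and decision rules, "
        "then recommend one of approve/deny/escalate."
    ),
    expected_output="A structured recommendation with decision and rationale.",
    agent=claims_agent,
)

# Rebuild the crew so it holds the templated task, then run once per input dict
crew = Crew(
    agents=[claims_agent, compliance_agent],
    tasks=[claim_analysis_task, compliance_task],
    process=Process.sequential,
)

results = crew.kickoff_for_each(inputs=[
    {"claim_id": "CLM-10291", "loan_type": "personal_loan", "jurisdiction": "US-NY",
     "claim_text": "Payment misapplied after autopay failure."},
    {"claim_id": "CLM-10292", "loan_type": "auto_loan", "jurisdiction": "US-CA",
     "claim_text": "Disputed late fee during forbearance."},
])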
Production Considerations
- Data residency: keep borrower PII inside approved regions. If your claims data must stay in-country or in-region, pin model endpoints and storage to that boundary.
- Auditability: persist every prompt input, tool response, final output, model version, and timestamp. Lending teams will need this during disputes and regulator reviews.
- Guardrails: block unsupported decisions like "approve because it seems fair." Force structured outputs with allowed values only: approve, deny, escalate (see the guard sketch after this list).
- Monitoring: track escalation rate, false denials, missing-document frequency, latency by jurisdiction, and tool failure rate. Spikes usually point to bad intake data or policy drift.
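A minimal guard before persisting, reusing the ClaimDecision model from earlier; the fall-back-to-escalate behavior is a design assumption for this sketch, not a CrewAI feature:

from pydantic import ValidationError

def guarded_persist(raw_output: str, claim_id: str, audit_tool: AuditLogTool) -> str:
    """Validate model output before it reaches downstream systems or the audit store."""
    try:
        parsed = ClaimDecision.model_validate_json(raw_output)
    except ValidationError as exc:
        # Malformed JSON or a decision outside approve/deny/escalate => manual review
        audit_tool._run(claim_id=claim_id, decision="escalate",
                        rationale=f"Output failed validation: {exc}")
        return "escalate"
    audit_tool._run(claim_id=parsed.claim_id, decision=parsed.decision,
                    rationale=parsed.rationale)
    return parsed.decision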
Common Pitfalls
- Using a single general-purpose agent for everything: this leads to inconsistent decisions. Split analysis from compliance so each step has one job.
- Letting the model decide without policy context: in lending you need deterministic rules first. Always retrieve product-specific policy before generating a recommendation.
- Skipping immutable logs: if you cannot reconstruct why a claim was escalated or denied later, you have an operational problem. Write every decision to append-only storage.
- Returning free-form text to downstream systems: downstream servicing platforms need machine-readable results. Use JSON-shaped outputs with fixed keys.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.