How to Build an Underwriting Agent Using CrewAI in Python for Pension Funds
An underwriting agent for pension funds reviews contribution history, member profiles, investment constraints, and policy rules to decide whether an application, transfer, or benefit action should move forward, be escalated, or be rejected. This matters because pension operations are high-volume, rules-heavy, and audit-sensitive: bad decisions create compliance exposure, member harm, and expensive manual rework.
Architecture
Build this agent as a small workflow, not a single prompt:
- **Intake layer**
  - Parses structured inputs from CRM, policy admin systems, KYC files, and contribution records.
  - Normalizes fields like member age, employer status, fund type, jurisdiction, and requested action.
- **Policy retrieval layer**
  - Pulls pension scheme rules, regulator guidance, and internal underwriting thresholds.
  - Keeps the agent grounded in current documents instead of relying on model memory.
- **Decision agent**
  - Applies underwriting logic to the normalized case.
  - Produces one of three outputs: approve, reject, or escalate, each with reasons.
- **Compliance checker**
  - Validates that the recommendation respects AML/KYC flags, sanctions checks, contribution limits, vesting rules, and local retirement law constraints.
  - Forces escalation when evidence is incomplete.
- **Audit logger**
  - Stores the input snapshot, retrieved policy excerpts, final decision, and reasoning trail.
  - Gives you traceability for internal audit and regulator review.
- **Human review queue**
  - Captures borderline cases.
  - Prevents automated approval when data quality is poor or policy ambiguity is high.
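The decision agent's three-way output is easiest to route if it is a small typed record rather than free text. A minimal sketch, assuming a `Decision` record of my own invention (not a CrewAI type); note that `Literal` is a type hint only and is not enforced at runtime:

```python
from dataclasses import dataclass, field
from typing import List, Literal

# Illustrative decision record; the names here are assumptions, not CrewAI API.
@dataclass
class Decision:
    # Literal documents the allowed values but is not validated at runtime.
    outcome: Literal["approve", "reject", "escalate"]
    reasons: List[str] = field(default_factory=list)

    def needs_human(self) -> bool:
        # Escalations always land in the human review queue.
        return self.outcome == "escalate"

d = Decision(outcome="escalate", reasons=["KYC file incomplete"])
```

Keeping reasons as a list, not a paragraph, is what later lets the audit logger store a structured reasoning trail.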
Implementation
1) Install CrewAI and define the case structure
Start with a typed payload so your underwriting flow does not depend on loose dictionaries everywhere. For pension funds, this is where you keep jurisdiction and compliance context explicit.
```python
from pydantic import BaseModel
from typing import Optional

class UnderwritingCase(BaseModel):
    member_id: str
    fund_name: str
    jurisdiction: str
    request_type: str
    age: int
    employer_status: str
    contribution_history_years: int
    aml_flag: bool = False
    kyc_complete: bool = True
    residency_country: Optional[str] = None
```
2) Create CrewAI agents with clear responsibilities
Use separate Agent objects for policy review and compliance validation. That keeps each role narrow and makes it easier to test.
```python
from crewai import Agent

policy_agent = Agent(
    role="Pension Policy Analyst",
    goal="Assess underwriting cases against pension fund policy and scheme rules",
    backstory=(
        "You review pension fund applications using documented scheme rules, "
        "member eligibility criteria, and jurisdiction-specific restrictions."
    ),
    verbose=True,
)

compliance_agent = Agent(
    role="Pension Compliance Reviewer",
    goal="Identify regulatory risks and force escalation when evidence is incomplete",
    backstory=(
        "You check AML/KYC status, residency constraints, auditability requirements, "
        "and local pension regulations before any recommendation is finalized."
    ),
    verbose=True,
)
```
3) Define tasks that produce decision-ready outputs
Keep the output format strict. For production use in pensions, you want a recommendation, a rationale, and flags for human review.
```python
from crewai import Task

policy_task = Task(
    description=(
        "Review this pension underwriting case: {case}. "
        "Determine whether the request satisfies fund policy. "
        "Return a concise recommendation with reasons and any missing information."
    ),
    expected_output="A decision summary with approve/reject/escalate and reasons.",
    agent=policy_agent,
)

compliance_task = Task(
    description=(
        "Validate the same case for compliance issues including AML/KYC gaps, "
        "jurisdiction restrictions, residency concerns, and audit risk. "
        "If anything is ambiguous or missing, recommend escalation."
    ),
    expected_output="A compliance assessment with risk flags and escalation guidance.",
    agent=compliance_agent,
)
```
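Because these tasks still return text, a thin parser that maps the crew's summary onto the approve/reject/escalate vocabulary keeps downstream routing deterministic. A hedged sketch: the `parse_outcome` name and keyword heuristics are my assumptions, with escalation as the fail-safe default for anything ambiguous:

```python
import re

def parse_outcome(summary: str) -> str:
    """Map a free-text decision summary onto approve/reject/escalate.

    Unknown or ambiguous text falls back to 'escalate' so a parsing
    gap can never silently approve a case.
    """
    text = summary.lower()
    # Order matters: any mention of escalation wins outright.
    if re.search(r"\bescalat", text):
        return "escalate"
    if re.search(r"\breject", text):
        return "reject"
    if re.search(r"\bapprov", text):
        return "approve"
    return "escalate"
```

In practice you would tighten this by instructing the agents to emit a fixed first line (for example `DECISION: escalate`) and parsing only that.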
4) Run the crew and persist the result for audit
This pattern uses `Crew`, `Process.sequential`, and `kickoff`. In a real system you would attach retrieval tools for your policy repository and write the output to an immutable audit store.
```python
from crewai import Crew, Process

def underwrite(case: UnderwritingCase):
    crew = Crew(
        agents=[policy_agent, compliance_agent],
        tasks=[policy_task, compliance_task],
        process=Process.sequential,
        verbose=True,
    )
    result = crew.kickoff(inputs={"case": case.model_dump()})
    return {
        "member_id": case.member_id,
        "jurisdiction": case.jurisdiction,
        "request_type": case.request_type,
        "crew_result": str(result),
    }

if __name__ == "__main__":
    sample_case = UnderwritingCase(
        member_id="M12345",
        fund_name="Northstar Pension Fund",
        jurisdiction="ZA",
        request_type="benefit_transfer_review",
        age=54,
        employer_status="active",
        contribution_history_years=8,
        aml_flag=False,
        kyc_complete=True,
        residency_country="South Africa",
    )
    decision = underwrite(sample_case)
    print(decision)
```
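The "immutable audit store" can be approximated with an append-only JSONL log where each record embeds a SHA-256 hash of the previous line, so editing or deleting any earlier entry breaks every later hash. A minimal stdlib-only sketch; the file layout and the `append_audit_record` name are my assumptions, not a CrewAI feature:

```python
import hashlib
import json
from pathlib import Path

def append_audit_record(log_path: Path, record: dict) -> str:
    """Append a record to a hash-chained JSONL audit log.

    Each line stores the SHA-256 of the previous line, so tampering
    with any earlier entry invalidates the rest of the chain.
    """
    prev_hash = "0" * 64  # genesis value for an empty log
    if log_path.exists():
        lines = log_path.read_text().splitlines()
        if lines:
            prev_hash = hashlib.sha256(lines[-1].encode()).hexdigest()
    entry = {"prev_hash": prev_hash, "record": record}
    line = json.dumps(entry, sort_keys=True)
    with log_path.open("a") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode()).hexdigest()
```

You would call this once per `underwrite` run with the input snapshot, retrieved policy text, and the returned decision dict, and verify the chain during audits.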
A good next step is adding tools for policy retrieval. In CrewAI that usually means giving an Agent a tool that queries your document store or vector index so decisions cite current scheme rules instead of hardcoded assumptions.
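Before wiring a real vector index, even a trivial keyword scorer over policy snippets makes the grounding pattern concrete. A sketch under stated assumptions: the `retrieve_policy` name and the snippet contents are invented for illustration, and in CrewAI you would wrap a function like this as a tool attached to the agent:

```python
def retrieve_policy(query: str, documents: dict, top_k: int = 2) -> list:
    """Return ids of the policy snippets sharing the most words with the query.

    Stand-in for a real document store or vector index; scoring is a
    plain word-overlap count, and zero-overlap snippets are dropped.
    """
    query_words = set(query.lower().split())
    scored = []
    for doc_id, text in documents.items():
        overlap = len(query_words & set(text.lower().split()))
        scored.append((overlap, doc_id))
    scored.sort(reverse=True)
    return [doc_id for overlap, doc_id in scored[:top_k] if overlap > 0]

# Hypothetical policy snippets for illustration only.
policies = {
    "transfer_rules": "benefit transfer requires completed kyc and employer consent",
    "age_limits": "early retirement benefit allowed from age 55 under scheme rules",
    "aml_policy": "any aml flag requires manual compliance escalation",
}
```

Swapping the scorer for embedding similarity changes nothing about the agent-side contract: the tool takes a query and returns citable snippet ids.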
Production Considerations
- **Data residency**
  - Pension data often cannot leave a specific country or region.
  - Keep model endpoints, vector stores, logs, and backups inside approved jurisdictions.
- **Auditability**
  - Store every `kickoff` input payload plus retrieved policy text plus final output.
  - Make sure auditors can reconstruct why a member was escalated or rejected.
- **Guardrails**
  - Force escalation when `kyc_complete=False`, `aml_flag=True`, or residency is unknown.
  - Never let the agent override mandatory scheme rules without human approval.
- **Monitoring**
  - Track approval rate by jurisdiction, false escalation rate, missing-data frequency, and manual override rate.
  - If one fund or region starts producing many escalations, your upstream data quality is probably broken.
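The guardrail rules above are simple enough to enforce in plain code before any model call, so a model failure can never bypass them. A minimal sketch against the `UnderwritingCase` fields defined earlier; the `must_escalate` name is my assumption:

```python
def must_escalate(case) -> list:
    """Return mandatory escalation reasons for an UnderwritingCase-like object.

    Runs before the crew is invoked; a non-empty result routes the
    case straight to the human review queue regardless of model output.
    """
    reasons = []
    if not case.kyc_complete:
        reasons.append("KYC incomplete")
    if case.aml_flag:
        reasons.append("AML flag raised")
    if not getattr(case, "residency_country", None):
        reasons.append("residency unknown")
    return reasons
```

Calling this at the top of `underwrite` and short-circuiting on a non-empty list keeps the mandatory rules outside the model's reach.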
Common Pitfalls
- **Using one agent for everything**
  - This mixes policy interpretation with compliance enforcement.
  - Split responsibilities into separate agents so you can test each behavior independently.
- **Letting the model decide without source documents**
  - Pension underwriting needs current scheme rules.
  - Attach retrieval tools or preloaded policy context; otherwise you get plausible but unusable answers.
- **Ignoring exception handling**
  - Real cases will have missing KYC fields, inconsistent residency data, or outdated contribution histories.
  - Treat missing evidence as an escalation trigger instead of trying to "infer" your way through it.
- **Skipping human review on edge cases**
  - Borderline pension decisions are where most operational risk lives.
  - Add a mandatory review queue for ambiguous cases rather than auto-finalizing them.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.