How to Integrate CrewAI for retail banking with FastAPI for AI agents
Combining CrewAI for retail banking with FastAPI gives you a clean way to expose agent workflows as production HTTP endpoints. In practice, that means your banking AI can route customer requests, run policy-aware tasks, and return structured responses through a standard API your web app or internal systems can call.
Prerequisites
- Python 3.10+
- fastapi
- uvicorn
- crewai
- Access to a CrewAI-compatible LLM provider configured via environment variables
- A retail banking agent design ready to wire into a Crew and Tasks
- Basic familiarity with REST APIs and Pydantic models
Install the packages:
pip install fastapi uvicorn crewai pydantic
Set your model credentials before running the app:
export OPENAI_API_KEY="your-key"
Integration Steps
1) Define the banking agent and task in CrewAI
Start by modeling one agent for retail banking operations. Keep the scope narrow: account support, payment guidance, product eligibility, or case triage.
from crewai import Agent, Task, Crew, Process

banking_agent = Agent(
    role="Retail Banking Assistant",
    goal="Help customers with retail banking queries while following policy constraints.",
    backstory=(
        "You work in a regulated retail banking environment. "
        "You must be concise, accurate, and escalate risky requests."
    ),
    verbose=True,
    allow_delegation=False,
)
support_task = Task(
    description=(
        # {customer_id} and {query} are filled from kickoff(inputs=...);
        # without these placeholders the inputs never reach the agent.
        "Answer the retail banking question from customer {customer_id}: {query}\n"
        "Return a clear action summary. "
        "If the request involves fraud, disputes, or sensitive account actions, "
        "flag it for human review."
    ),
    expected_output="A short customer response with next steps or escalation notes.",
    agent=banking_agent,
)
crew = Crew(
    agents=[banking_agent],
    tasks=[support_task],
    process=Process.sequential,
)
This is the core CrewAI object graph. The Agent, Task, and Crew classes are what you’ll call from FastAPI.
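CrewAI fills `{placeholder}` fields in task descriptions from the `inputs` dict you pass to `kickoff`. A minimal stdlib sketch of that interpolation pattern (the description string here is illustrative, not the exact one CrewAI renders):

```python
# Conceptual sketch: kickoff(inputs=...) values fill {placeholders}
# in task descriptions, much like str.format.
description = (
    "Answer the retail banking question from customer {customer_id}: {query}"
)
inputs = {
    "customer_id": "CUST-10021",
    "query": "How do I open a savings account?",
}
rendered = description.format(**inputs)  # roughly what the agent ultimately sees
```

If a task description has no placeholders, the inputs are silently unused, so double-check the mapping between your `inputs` keys and the description text.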
2) Create a FastAPI request/response contract
Define explicit schemas so your endpoint stays stable. Don’t pass raw dictionaries around unless you want brittle integrations later.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Retail Banking AI Agent API")

class BankingRequest(BaseModel):
    customer_id: str
    query: str

class BankingResponse(BaseModel):
    customer_id: str
    answer: str
This gives you typed input validation at the edge of your system. For banking workflows, that matters because malformed payloads should fail before they hit the agent layer.
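A quick sketch of that edge validation, assuming pydantic is installed (the model mirrors the schema above):

```python
from pydantic import BaseModel, ValidationError

class BankingRequest(BaseModel):
    customer_id: str
    query: str

# A well-formed payload parses cleanly.
ok = BankingRequest(customer_id="CUST-10021", query="How do I open an account?")

# A payload missing "query" raises ValidationError;
# FastAPI converts this into an HTTP 422 before any agent code runs.
try:
    BankingRequest(customer_id="CUST-10021")
    rejected = False
except ValidationError:
    rejected = True
```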
3) Wire CrewAI execution into a FastAPI endpoint
Now connect the endpoint to the crew execution path. In CrewAI, the standard pattern is calling crew.kickoff(inputs=...) and returning the result.
@app.post("/banking-assist", response_model=BankingResponse)
def banking_assist(payload: BankingRequest):
    result = crew.kickoff(
        inputs={
            "customer_id": payload.customer_id,
            "query": payload.query,
        }
    )
    return BankingResponse(
        customer_id=payload.customer_id,
        answer=str(result),
    )
If you want stricter control over what the model sees, inject only sanitized fields into inputs. In retail banking systems, avoid sending unnecessary PII to the agent.
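One way to do that is a small sanitizer applied to the query before it goes into `inputs`. This is a hypothetical helper with illustrative regex patterns, not a complete PII solution:

```python
import re

# Hypothetical sanitizer: strip common PII-like patterns before building
# crew inputs. The patterns are illustrative; a real system needs a
# vetted PII-detection layer.
ACCOUNT_RE = re.compile(r"\b\d{8,16}\b")  # long digit runs (account/card numbers)
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def sanitize_query(query: str) -> str:
    query = ACCOUNT_RE.sub("[REDACTED_NUMBER]", query)
    return EMAIL_RE.sub("[REDACTED_EMAIL]", query)
```

Call `sanitize_query(payload.query)` before adding it to `inputs` so the model never sees raw identifiers.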
4) Add basic guardrails for sensitive requests
For production use, add a pre-check before invoking the crew. This keeps obvious high-risk requests out of automated handling.
SENSITIVE_KEYWORDS = {"password", "otp", "pin", "fraud", "chargeback"}

@app.post("/banking-assist-safe", response_model=BankingResponse)
def banking_assist_safe(payload: BankingRequest):
    query_lower = payload.query.lower()
    # Naive substring match ("pin" also matches "shopping");
    # use word-boundary regexes or a classifier in production.
    if any(keyword in query_lower for keyword in SENSITIVE_KEYWORDS):
        return BankingResponse(
            customer_id=payload.customer_id,
            answer="This request requires human review. Please contact support.",
        )
    result = crew.kickoff(
        inputs={
            "customer_id": payload.customer_id,
            "query": payload.query,
        }
    )
    return BankingResponse(
        customer_id=payload.customer_id,
        answer=str(result),
    )
This is not full compliance logic, but it’s the right shape. Put deterministic policy checks before LLM calls whenever possible.
5) Run the API locally
Use Uvicorn to serve your FastAPI app.
uvicorn main:app --reload --host 0.0.0.0 --port 8000
At this point your AI agent system is reachable over HTTP and can be consumed by frontend apps, service desks, or orchestration layers.
Testing the Integration
Use curl or any HTTP client to verify the endpoint returns a structured response.
curl -X POST "http://127.0.0.1:8000/banking-assist-safe" \
  -H "Content-Type: application/json" \
  -d '{
    "customer_id": "CUST-10021",
    "query": "What documents do I need to open a savings account?"
  }'
Expected output:
{
  "customer_id": "CUST-10021",
  "answer": "..."
}
For a sensitive query:
curl -X POST "http://127.0.0.1:8000/banking-assist-safe" \
  -H "Content-Type: application/json" \
  -d '{
    "customer_id": "CUST-10021",
    "query": "I forgot my PIN and need it now"
  }'
Expected output:
{
  "customer_id": "CUST-10021",
  "answer": "This request requires human review. Please contact support."
}
Real-World Use Cases
- Retail support triage: route balance questions, card replacement requests, and account opening FAQs through an agent-backed API.
- Policy-aware service automation: use CrewAI agents to draft responses while FastAPI handles authentication, rate limits, and audit logging.
- Back-office case classification: classify incoming tickets for disputes, onboarding issues, or loan servicing before assigning them to teams.
If you want this pattern to survive production traffic, keep FastAPI thin and make CrewAI responsible only for reasoning and drafting. Put auth, validation, logging, and escalation rules at the API layer where they belong.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit