How to Integrate Anthropic with Cloudflare Workers for Multi-Agent Banking Systems
Why this integration matters
If you’re building multi-agent systems for banking, Anthropic gives you the reasoning layer and Cloudflare Workers gives you the edge runtime. That combination is useful when one agent needs to classify a customer request, another needs to fetch policy data, and a third needs to execute a low-latency workflow close to the user.
In practice, this lets you run banking assistants that stay responsive, keep sensitive logic at the edge, and coordinate multiple agents without dragging every request through a central app server.
Prerequisites
- Python 3.10+
- An Anthropic API key
- A Cloudflare account
- A Cloudflare Workers project already created
- `wrangler` installed and authenticated
- `requests` installed for Python HTTP calls
- Basic familiarity with JSON payloads and REST APIs
Install the Python dependencies:

```bash
pip install anthropic requests
```

Set your environment variables:

```bash
export ANTHROPIC_API_KEY="your_anthropic_key"
export CLOUDFLARE_ACCOUNT_ID="your_account_id"
export CLOUDFLARE_API_TOKEN="your_cloudflare_api_token"
export WORKER_NAME="banking-agent-router"
```
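Missing credentials tend to surface as confusing auth errors deep inside an API call, so it helps to fail fast at startup. A minimal sketch (the `require_env` helper is my own illustration, not part of either SDK):

```python
import os

def require_env(*names: str) -> None:
    """Raise early if any required environment variable is missing."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

# Call once at startup, before creating any clients:
# require_env("ANTHROPIC_API_KEY", "CLOUDFLARE_ACCOUNT_ID",
#             "CLOUDFLARE_API_TOKEN", "WORKER_NAME")
```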
Integration Steps
Step 1: Create the Anthropic client and define the agent task
For banking workflows, keep prompts narrow and structured. Your first agent should usually do routing or classification, not final decisions.
```python
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def classify_request(customer_message: str) -> str:
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=200,
        temperature=0,
        messages=[
            {
                "role": "user",
                "content": f"""
Classify this banking request into one of:
- balance_inquiry
- card_dispute
- loan_question
- fraud_alert
- other

Return only the label.

Message: {customer_message}
""",
            }
        ],
    )
    return msg.content[0].text.strip()
```
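Even with a narrow prompt, the model can occasionally return text outside the expected label set, so it's worth normalizing the classifier's answer before routing on it. A minimal sketch (the `VALID_ROUTES` set and `normalize_route` helper are my own additions, not part of the Anthropic SDK):

```python
VALID_ROUTES = {"balance_inquiry", "card_dispute", "loan_question", "fraud_alert", "other"}

def normalize_route(raw_label: str) -> str:
    """Map the model's raw text onto a known route, defaulting to 'other'."""
    label = raw_label.strip().lower().replace(" ", "_").rstrip(".!")
    return label if label in VALID_ROUTES else "other"
```

Downstream code would then call `normalize_route(classify_request(message))` so that anything unexpected degrades safely to the `other` route instead of hitting an unknown branch.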
Step 2: Expose a Cloudflare Worker as the orchestration layer
The Worker becomes your low-latency entry point for agent coordination. In production, it can route to different internal services or invoke downstream tools based on Anthropic’s output.
Deploy a Worker that accepts JSON input and returns a normalized response:
```python
import os
import json
import requests

ACCOUNT_ID = os.environ["CLOUDFLARE_ACCOUNT_ID"]
API_TOKEN = os.environ["CLOUDFLARE_API_TOKEN"]
WORKER_NAME = os.environ["WORKER_NAME"]

def deploy_worker_script(script: str):
    url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/workers/scripts/{WORKER_NAME}"
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    # ES-module Workers (scripts using `export default`) must be uploaded as
    # multipart/form-data with a metadata part naming the main module.
    metadata = {"main_module": "worker.js"}
    files = {
        "metadata": (None, json.dumps(metadata), "application/json"),
        "worker.js": ("worker.js", script, "application/javascript+module"),
    }
    resp = requests.put(url, headers=headers, files=files)
    resp.raise_for_status()
    return resp.json()

worker_script = """
export default {
  async fetch(request) {
    const body = await request.json();
    return Response.json({
      received: true,
      route: body.route,
      customer_id: body.customer_id,
      status: "queued"
    });
  }
}
"""

result = deploy_worker_script(worker_script)
print(result["success"])
```
Step 3: Call the Worker from Python after Anthropic classifies the request
This is the core integration pattern: Anthropic decides what kind of banking task it is, then Cloudflare Workers handles edge execution or fan-out to other agents.
```python
import requests

def send_to_worker(route: str, customer_id: str, payload: dict):
    # Note: workers.dev hostnames include your account subdomain,
    # e.g. https://<worker>.<your-subdomain>.workers.dev — adjust accordingly.
    worker_url = f"https://{WORKER_NAME}.workers.dev"
    response = requests.post(
        worker_url,
        json={
            "route": route,
            "customer_id": customer_id,
            "payload": payload,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

customer_message = "I see an unknown card charge from last night."
route = classify_request(customer_message)
worker_response = send_to_worker(
    route=route,
    customer_id="cust_12345",
    payload={
        "message": customer_message,
        "priority": "high",
    },
)
print(worker_response)
```
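The hop to the edge can fail transiently (timeouts, cold starts, brief network blips), so production callers usually wrap `send_to_worker` in a small retry. A sketch with exponential backoff (the `with_retries` helper is illustrative, not from either SDK):

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Call fn(), retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage:
# result = with_retries(lambda: send_to_worker(route, "cust_12345", payload))
```

For banking workloads you'd likely narrow the caught exception types and make retries idempotent on the Worker side before enabling this.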
Step 4: Add multi-agent handoff logic in Python
In a real system, one agent classifies, another drafts a response, and a third validates policy constraints. Keep each agent’s responsibility explicit.
```python
def draft_reply(customer_message: str, route: str) -> str:
    prompt = f"""
You are a banking support agent.
Task type: {route}

Draft a short internal note for the next agent.

Customer message: {customer_message}
"""
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=150,
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text.strip()

def orchestrate(customer_message: str):
    route = classify_request(customer_message)
    note = draft_reply(customer_message, route)
    result = send_to_worker(
        route=route,
        customer_id="cust_12345",
        payload={"note": note},
    )
    return {"route": route, "note": note, "worker_result": result}

print(orchestrate("My debit card was declined twice today."))
```
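The third responsibility mentioned above, validating policy constraints, can start as a deterministic check before any model is involved. A rule-based sketch (the specific rules and the `validate_note` helper are illustrative assumptions, not actual banking policy):

```python
def validate_note(note: str) -> list:
    """Return a list of policy violations found in an internal note."""
    violations = []
    lowered = note.lower()
    # Illustrative rule: agents must not commit the bank to outcomes.
    if "guarantee" in lowered or "promise" in lowered:
        violations.append("note makes a commitment on behalf of the bank")
    # Illustrative rule: flag notes that may leak account numbers.
    if "account" in lowered and any(ch.isdigit() for ch in lowered):
        violations.append("note may expose an account number")
    return violations
```

In `orchestrate`, you could reject or re-draft the note whenever `validate_note(note)` returns a non-empty list, keeping the validation step cheap and auditable.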
Step 5: Use Cloudflare Workers for edge-side fan-out
When you have multiple agents behind different services, let Workers act as the dispatcher. The Worker can forward to separate endpoints based on the route returned by Anthropic.
```python
def update_worker_router():
    script = """
export default {
  async fetch(request) {
    const body = await request.json();
    const routes = {
      fraud_alert: "https://fraud-service.internal/agent",
      card_dispute: "https://disputes-service.internal/agent",
      balance_inquiry: "https://accounts-service.internal/agent"
    };
    const target = routes[body.route] || "https://general-service.internal/agent";
    return Response.json({
      routed_to: target,
      route: body.route,
      customer_id: body.customer_id
    });
  }
}
"""
    return deploy_worker_script(script)

print(update_worker_router()["success"])
```
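Because the routing table lives inside the Worker script, it can drift from what the Python side expects. One option is to keep a single route map in Python, render it into the Worker on deploy, and unit-test route resolution locally. A sketch (the `ROUTE_TABLE` and `resolve_route` names are my own, not from the article's services):

```python
ROUTE_TABLE = {
    "fraud_alert": "https://fraud-service.internal/agent",
    "card_dispute": "https://disputes-service.internal/agent",
    "balance_inquiry": "https://accounts-service.internal/agent",
}
DEFAULT_TARGET = "https://general-service.internal/agent"

def resolve_route(route: str) -> str:
    """Mirror of the Worker's dispatch logic, usable in local tests."""
    return ROUTE_TABLE.get(route, DEFAULT_TARGET)
```

With this in place, the Worker script can embed `json.dumps(ROUTE_TABLE)` at deploy time, so the edge dispatcher and your tests share one source of truth.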
Testing the Integration
Run an end-to-end check with one request through Anthropic and one hop through Cloudflare Workers.
```python
def test_flow():
    message = "I want to dispute a card payment from yesterday."
    route = classify_request(message)
    assert route in {"balance_inquiry", "card_dispute", "loan_question", "fraud_alert", "other"}
    worker_result = send_to_worker(
        route=route,
        customer_id="cust_test_001",
        payload={"message": message},
    )
    assert worker_result["received"] is True

test_flow()
print("Integration OK")
```
Expected output:

```
Integration OK
```

If you print intermediate values, you should see something like:

```
card_dispute
{'received': True, 'route': 'card_dispute', 'customer_id': 'cust_test_001', 'status': 'queued'}
```
Real-World Use Cases
- Fraud triage pipeline
  - Anthropic classifies incoming alerts.
  - Cloudflare Workers routes high-risk cases to an investigation service at the edge.
- Customer service orchestration
  - One agent summarizes intent.
  - Another drafts policy-safe responses.
  - Workers dispatches tasks to specialized backend services.
- Loan intake pre-processing
  - Anthropic extracts missing fields from an application.
  - Workers fans out validation checks across underwriting and document verification agents.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.