How to Integrate Anthropic for banking with Cloudflare Workers for AI agents
Combining Anthropic for banking with Cloudflare Workers gives you a practical pattern for building AI agents that sit close to your users and still call a model with strong reasoning. The useful part is not just chat; it’s low-latency orchestration at the edge, which matters when your agent is checking balances, classifying disputes, or routing customer requests under strict response-time and compliance constraints.
Prerequisites
- Python 3.10+
- An Anthropic API key
- A Cloudflare account
- A Cloudflare Workers project already created
- `wrangler` installed and authenticated
- `requests` installed in your Python environment
- Basic familiarity with HTTP APIs and JSON payloads
Install the Python dependency:
```bash
pip install requests anthropic
```
Set your environment variables:
```bash
export ANTHROPIC_API_KEY="your-anthropic-key"
export CLOUDFLARE_WORKER_URL="https://your-worker.your-subdomain.workers.dev"
```
Integration Steps
1) Build the Anthropic client in Python
Start by creating a small service layer that talks to Anthropic directly. In banking workflows, keep this layer narrow: one function for classification, one for extraction, one for response generation.
```python
import os

from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def classify_banking_request(user_text: str) -> str:
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=200,
        temperature=0,
        messages=[
            {
                "role": "user",
                "content": f"""
Classify this banking request into one of:
- balance_inquiry
- card_dispute
- transfer_request
- fraud_alert
- general_support

Request: {user_text}

Return only the label.
""",
            }
        ],
    )
    return message.content[0].text.strip()
```
This uses the Anthropic Messages API via client.messages.create(...). For agent systems, zero temperature and constrained labels are the right default.
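Even at temperature 0, treat the model's label as untrusted input before it drives any routing. A minimal sketch of that guard, where `normalize_label` is a hypothetical helper and the allowed set mirrors the prompt above:

```python
# Allowed labels, matching the classification prompt. Anything else
# falls back to the safe default rather than driving a routing decision.
ALLOWED_LABELS = {
    "balance_inquiry",
    "card_dispute",
    "transfer_request",
    "fraud_alert",
    "general_support",
}

def normalize_label(raw: str) -> str:
    """Strip whitespace and quotes, then coerce unknown labels to a safe default."""
    label = raw.strip().strip('"').strip("'").lower()
    return label if label in ALLOWED_LABELS else "general_support"
```

In practice you would wrap the classifier's return value, e.g. `normalize_label(classify_banking_request(text))`, so a malformed model response can never reach a routing branch you didn't plan for.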
2) Create a Cloudflare Worker endpoint for edge orchestration
Your Worker should act as the thin control plane. It receives user input, applies policy checks, and forwards approved requests to your Python service or directly to downstream systems.
If you’re calling the Worker from Python during development, keep the Worker contract simple:
```javascript
export default {
  async fetch(request, env) {
    const body = await request.json();

    if (!body.user_text) {
      return new Response(JSON.stringify({ error: "user_text is required" }), {
        status: 400,
        headers: { "Content-Type": "application/json" },
      });
    }

    return new Response(JSON.stringify({
      received: true,
      user_text: body.user_text,
      route: "anthropic_classification",
    }), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```
Deploy it with Wrangler:
```bash
wrangler deploy
```
For production banking flows, this Worker is where you enforce request size limits, auth checks, rate limits, and regional routing before any model call happens.
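You can mirror the size limit client-side so oversized payloads never leave your service. A minimal sketch, assuming a 4 KB cap (pick whatever limit your Worker actually enforces):

```python
# Assumed cap; keep this in sync with the limit enforced at the edge.
MAX_BODY_BYTES = 4096

def within_size_limit(user_text: str) -> bool:
    """Reject payloads before sending, mirroring the Worker's request size limit."""
    return len(user_text.encode("utf-8")) <= MAX_BODY_BYTES
```

Checking on both sides costs little and means a misbehaving client fails fast locally instead of burning a round trip to the edge.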
3) Call the Cloudflare Worker from Python
Now wire Python to the Worker using requests. This gives you an edge-first entry point for your AI agent system.
```python
import os

import requests

WORKER_URL = os.environ["CLOUDFLARE_WORKER_URL"]

def send_to_worker(user_text: str) -> dict:
    response = requests.post(
        WORKER_URL,
        json={"user_text": user_text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```
At this point you have a clean split:
- Cloudflare Workers handles ingress and policy gates
- Python handles Anthropic calls and business logic
That separation keeps your agent architecture maintainable.
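Edge calls can fail transiently, so the Python side should retry with backoff rather than surface every network blip. A minimal sketch, where `call_with_retries` is a hypothetical helper you would wrap around `send_to_worker`:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on any exception.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage would look like `call_with_retries(lambda: send_to_worker(user_text))`. For production you would likely restrict the caught exception types and add jitter, but the shape stays the same.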
4) Orchestrate Worker output with Anthropic in Python
Use the Worker as a pre-processing layer. The worker can normalize input or attach metadata, then Python sends that context into Anthropic for final reasoning.
```python
import os

import requests
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
WORKER_URL = os.environ["CLOUDFLARE_WORKER_URL"]

def handle_banking_agent_request(user_text: str) -> str:
    worker_response = requests.post(
        WORKER_URL,
        json={"user_text": user_text},
        timeout=10,
    )
    # Fail fast on edge errors instead of feeding an error body to the model.
    worker_response.raise_for_status()
    worker_result = worker_response.json()

    classification = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=150,
        temperature=0,
        messages=[
            {
                "role": "user",
                "content": f"""
You are a banking assistant.

The edge router returned:
{worker_result}

Classify and draft the next action for this request:
{user_text}

Return JSON with keys: label, next_action.
""",
            }
        ],
    )
    return classification.content[0].text.strip()
```
This pattern works well when you need deterministic routing before invoking model reasoning. In regulated environments, do not let the model decide whether a request is allowed; let Workers do that first.
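One way to make that split explicit in Python is to check the Worker's verdict before the model is ever invoked. A minimal sketch, where `edge_approved` is a hypothetical gate keyed to the Worker contract from step 2:

```python
def edge_approved(worker_result: dict) -> bool:
    """Proceed to model reasoning only when the edge explicitly approved the request.

    Keys match the Worker response contract from step 2; any other shape is denied.
    """
    return (
        worker_result.get("received") is True
        and worker_result.get("route") == "anthropic_classification"
    )
```

Calling code would short-circuit on `not edge_approved(worker_result)` and return a refusal, so a blocked or malformed edge response can never reach the Anthropic call.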
5) Add a structured response path for downstream systems
For real agents, you want structured outputs that can be consumed by ticketing systems or internal APIs. Keep the final output machine-readable.
```python
import json

def build_agent_response(user_text: str) -> dict:
    result = handle_banking_agent_request(user_text)
    try:
        parsed = json.loads(result)
    except json.JSONDecodeError:
        # Fall back to a safe default when the model returns non-JSON text.
        parsed = {
            "label": "general_support",
            "next_action": result,
        }
    return parsed
```
This gives downstream services something predictable to work with. If you’re routing disputes or fraud cases into case management tools, structured JSON is non-negotiable.
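Before handing the payload to a case management tool, it is worth validating the shape explicitly rather than trusting `json.loads` alone. A minimal sketch, where `validate_agent_response` is a hypothetical checker for the two keys this tutorial's prompt requests:

```python
def validate_agent_response(parsed: dict) -> dict:
    """Enforce the exact shape downstream systems expect: string label and next_action.

    Extra keys from the model are dropped; missing or non-string values raise.
    """
    if not isinstance(parsed.get("label"), str) or not isinstance(parsed.get("next_action"), str):
        raise ValueError("agent response must contain string 'label' and 'next_action'")
    return {"label": parsed["label"], "next_action": parsed["next_action"]}
```

In a regulated pipeline you would likely swap this for a real schema validator, but even this thin check stops a half-formed model response from reaching a ticketing API.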
Testing the Integration
Run a quick end-to-end check from Python:
```python
if __name__ == "__main__":
    sample = "I see an unknown card charge from last night."

    worker_payload = send_to_worker(sample)
    print("Worker:", worker_payload)

    label = classify_banking_request(sample)
    print("Anthropic label:", label)
```
Expected output:
```text
Worker: {'received': True, 'user_text': 'I see an unknown card charge from last night.', 'route': 'anthropic_classification'}
Anthropic label: card_dispute
```
If the worker responds correctly and Anthropic returns the right label consistently across repeated runs, your integration is wired correctly.
Real-World Use Cases
- **Fraud triage agents**: Classify incoming customer reports at the edge, then use Anthropic to summarize intent and generate next-step instructions for fraud ops.
- **Card dispute assistants**: Let Workers validate session context and request shape before passing details to Anthropic for dispute categorization and evidence collection prompts.
- **Banking support routers**: Route balance inquiries, transfer questions, and loan support requests through a Worker-first pipeline so only approved cases reach model reasoning and backend tools.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.