How to Integrate FastAPI Payment Endpoints with LangChain AI Agents
Combining FastAPI for payments with LangChain gives you a clean way to let an AI agent trigger real payment workflows without turning your agent into a payment processor. FastAPI handles the API boundary, validation, and async I/O; LangChain handles the reasoning, tool selection, and orchestration around when a payment should happen.
This is useful when you want an agent to create invoices, charge cards after user approval, check payment status, or route a refund request into a controlled backend workflow.
Prerequisites
- Python 3.10+
- A FastAPI app with payment endpoints already defined
- A payment provider SDK or API client behind your FastAPI service
- LangChain installed: langchain, plus langchain-openai or another model provider
- uvicorn for running the FastAPI server
- httpx for calling your FastAPI payment API from the agent layer
- Environment variables configured:
  - OPENAI_API_KEY or equivalent LLM key
  - Payment provider credentials if your FastAPI service talks to Stripe, Adyen, etc.
- Basic understanding of:
  - FastAPI request/response models
  - LangChain tools and agents
Integration Steps
1) Expose payment operations through FastAPI
Keep payments behind explicit API routes. Do not let the agent talk directly to your card processor; it should call your own service.
# payments_api.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field
from typing import Literal
app = FastAPI(title="Payments API")
class ChargeRequest(BaseModel):
    customer_id: str = Field(..., examples=["cus_123"])
    amount_cents: int = Field(..., gt=0)
    currency: str = Field(default="usd")
    description: str | None = None

class ChargeResponse(BaseModel):
    payment_id: str
    status: Literal["succeeded", "pending", "failed"]

@app.post("/payments/charge", response_model=ChargeResponse)
async def charge_payment(payload: ChargeRequest):
    # Replace with Stripe/Adyen/Braintree SDK call in production.
    if payload.amount_cents > 500000:
        raise HTTPException(status_code=400, detail="Amount exceeds limit")
    return ChargeResponse(payment_id="pay_abc123", status="succeeded")
Run it:
uvicorn payments_api:app --reload --port 8000
2) Wrap the FastAPI endpoint as a LangChain tool
LangChain tools are the bridge. The agent decides when to call them; the tool sends the request to FastAPI.
# agent_tools.py
import httpx
from langchain_core.tools import tool
PAYMENTS_BASE_URL = "http://localhost:8000"
@tool
def charge_customer(customer_id: str, amount_cents: int, currency: str = "usd") -> str:
    """Charge a customer via the payments API."""
    payload = {
        "customer_id": customer_id,
        "amount_cents": amount_cents,
        "currency": currency,
        "description": "Agent-generated charge",
    }
    response = httpx.post(f"{PAYMENTS_BASE_URL}/payments/charge", json=payload, timeout=10.0)
    response.raise_for_status()
    data = response.json()
    return f"payment_id={data['payment_id']}, status={data['status']}"
This keeps your agent stateless and makes payment execution observable through normal API logs.
3) Build a LangChain agent that can call the payment tool
Use a chat model plus tools. The model decides whether charging is appropriate based on user intent.
# agent.py
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from agent_tools import charge_customer
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a billing assistant. Only charge when the user explicitly approves."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
tools = [charge_customer]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = executor.invoke({
    "input": "The customer approved the $25 subscription renewal. Charge cus_123."
})
print(result["output"])
The important part here is policy control. The system prompt should enforce approval rules before any tool execution.
4) Add an approval gate before executing payments
For banking and insurance workflows, don’t let free-form text trigger money movement without deterministic checks.
# guardrails.py
def requires_explicit_approval(user_text: str) -> bool:
    keywords = ["approved", "confirm", "authorize", "go ahead"]
    return any(word in user_text.lower() for word in keywords)

def safe_charge(user_text: str, executor):
    if not requires_explicit_approval(user_text):
        return "Payment blocked: explicit approval required."
    return executor.invoke({"input": user_text})["output"]
Use this gate before calling the LangChain executor. In production, pair it with identity checks and transaction limits.
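Keyword matching alone is coarse, so deterministic numeric limits belong next to it. A minimal sketch of a transaction-limit check that runs regardless of what the model decided (the specific caps here are illustrative, not recommendations):

```python
MAX_TXN_CENTS = 50_000     # illustrative per-transaction cap
DAILY_CAP_CENTS = 200_000  # illustrative per-customer daily cap

def within_limits(amount_cents: int, daily_total_cents: int) -> bool:
    """Deterministic policy check, independent of the model's decision."""
    if amount_cents <= 0 or amount_cents > MAX_TXN_CENTS:
        return False
    return daily_total_cents + amount_cents <= DAILY_CAP_CENTS

print(within_limits(2_500, 0))        # True: small charge, fresh day
print(within_limits(75_000, 0))       # False: over the per-transaction cap
print(within_limits(2_500, 199_000))  # False: would breach the daily cap
```

Run this alongside requires_explicit_approval before invoking the executor, and block the charge if either check fails.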
5) Return structured results back into your app flow
Your downstream app should receive structured output so it can update order state or notify the user.
# app_integration.py
from pydantic import BaseModel
class BillingResult(BaseModel):
    message: str
    payment_id: str | None = None
    status: str | None = None

def parse_agent_output(output: str) -> BillingResult:
    # Simple parsing for demo purposes; use structured output in production.
    parts = dict(item.split("=") for item in output.split(", "))
    return BillingResult(
        message="Payment processed",
        payment_id=parts.get("payment_id"),
        status=parts.get("status"),
    )
For production systems, prefer LangChain structured output or function-calling patterns instead of string parsing.
Testing the Integration
Start the FastAPI server first, then run this script to verify the full path from agent to payment API.
# test_integration.py
from agent import executor
response = executor.invoke({
    "input": "Customer approved the renewal. Charge cus_123 for $25."
})
print(response["output"])
Expected output:
> Entering new AgentExecutor chain...
Invoking tool: charge_customer with {'customer_id': 'cus_123', 'amount_cents': 2500, 'currency': 'usd'}
payment_id=pay_abc123, status=succeeded
> Finished chain.
If you want a stricter test suite, mock httpx.post and assert that /payments/charge was called with the correct payload.
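A self-contained sketch of that test. It uses dependency injection so the snippet runs with no server and no httpx installed; in a real suite against agent_tools.py you would more likely use unittest.mock.patch on agent_tools.httpx.post:

```python
from unittest.mock import MagicMock

PAYMENTS_BASE_URL = "http://localhost:8000"

def charge_customer(customer_id: str, amount_cents: int,
                    currency: str = "usd", post=None) -> str:
    # Plain-function copy of the step 2 tool body; `post` is injectable for tests.
    if post is None:
        import httpx  # real client only when no fake is supplied
        post = httpx.post
    payload = {"customer_id": customer_id, "amount_cents": amount_cents,
               "currency": currency, "description": "Agent-generated charge"}
    response = post(f"{PAYMENTS_BASE_URL}/payments/charge", json=payload, timeout=10.0)
    response.raise_for_status()
    data = response.json()
    return f"payment_id={data['payment_id']}, status={data['status']}"

def test_charge_hits_payments_endpoint():
    fake_post = MagicMock()
    fake_post.return_value.json.return_value = {
        "payment_id": "pay_test", "status": "succeeded"
    }
    result = charge_customer("cus_123", 2500, post=fake_post)
    # Assert the URL and payload that reached the payments API.
    assert fake_post.call_args.args[0].endswith("/payments/charge")
    assert fake_post.call_args.kwargs["json"]["amount_cents"] == 2500
    assert result == "payment_id=pay_test, status=succeeded"

test_charge_hits_payments_endpoint()
print("test passed")
```

The assertions pin down both sides of the contract: the exact payload sent to /payments/charge and the string the agent receives back.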
Real-World Use Cases
- Subscription billing assistants: an AI agent can renew subscriptions after user approval and send failed-payment follow-ups.
- Claims and reimbursement workflows: the agent can collect claim details, validate policy context, then trigger payout requests through FastAPI.
- Invoice collection copilots: the assistant can generate reminders, negotiate terms within policy limits, and initiate charges or refunds through controlled endpoints.
The pattern is simple: keep money movement in FastAPI services you control, and let LangChain decide when to invoke those services. That separation gives you auditability, policy enforcement, and room to scale without turning your agent into an unsafe monolith.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.