How to Integrate FastAPI for fintech with LangChain for multi-agent systems

By Cyprian Aarons · Updated 2026-04-21
Tags: fastapi-for-fintech, langchain, multi-agent-systems

Combining FastAPI for fintech with LangChain gives you a clean way to expose regulated business logic as APIs while letting multiple agents coordinate on top of it. The practical win is simple: FastAPI handles request validation, auth, and transaction boundaries; LangChain handles orchestration, tool selection, and agent workflows.

For fintech teams, this is the difference between a chatbot that guesses and an agent system that can check balances, trigger payment workflows, fetch risk signals, and route decisions through controlled API endpoints.

Prerequisites

  • Python 3.10+
  • FastAPI installed
  • Uvicorn installed
  • LangChain installed
  • An LLM provider configured, such as OpenAI or Anthropic
  • A fintech backend or mock service with endpoints for:
    • account lookup
    • payment initiation
    • transaction status
  • Basic understanding of:
    • FastAPI()
    • path operations like @app.post()
    • LangChain tools via @tool
    • agent creation with create_tool_calling_agent() (the older initialize_agent() is deprecated in recent LangChain releases)

Install the core packages:

pip install fastapi uvicorn langchain langchain-openai pydantic httpx

Integration Steps

1) Build the FastAPI fintech service

Start by exposing your fintech capabilities as explicit API endpoints. Keep the business logic behind typed request/response models so the agent never talks to raw internal services directly.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Literal

app = FastAPI(title="Fintech API")

class TransferRequest(BaseModel):
    from_account: str
    to_account: str
    amount: float
    currency: Literal["USD", "EUR", "GBP"]

class TransferResponse(BaseModel):
    transaction_id: str
    status: str

@app.get("/accounts/{account_id}")
def get_account(account_id: str):
    if account_id == "missing":
        raise HTTPException(status_code=404, detail="Account not found")
    return {
        "account_id": account_id,
        "balance": 12500.75,
        "currency": "USD",
        "status": "active"
    }

@app.post("/transfers", response_model=TransferResponse)
def create_transfer(payload: TransferRequest):
    return {
        "transaction_id": "tx_123456",
        "status": "queued"
    }

This is the contract your agents will call. In production, add auth middleware, idempotency keys, audit logging, and rate limits before exposing these routes.
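The idempotency-key pattern mentioned above can be sketched in a few lines. This is a minimal illustration, not the article's API: the in-memory store and function name are assumptions, and a real service would back this with Redis or a database.

```python
import uuid

# Naive in-memory idempotency store; use Redis or a database in production.
_seen: dict[str, dict] = {}

def execute_transfer_once(idempotency_key: str, payload: dict) -> dict:
    """Return the stored result if this key was already processed,
    so a retried request cannot create a second transfer."""
    if idempotency_key in _seen:
        return _seen[idempotency_key]
    result = {"transaction_id": f"tx_{uuid.uuid4().hex[:8]}", "status": "queued"}
    _seen[idempotency_key] = result
    return result
```

Calling this twice with the same key returns the same transaction record, which is exactly the property you want when an agent retries a failed call.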

2) Wrap FastAPI endpoints as LangChain tools

Now turn those API calls into LangChain tools. The agent should call a tool instead of inventing its own workflow.

import httpx
from langchain_core.tools import tool

BASE_URL = "http://localhost:8000"

@tool
def fetch_account_balance(account_id: str) -> str:
    """Fetch account details from the fintech API."""
    response = httpx.get(f"{BASE_URL}/accounts/{account_id}", timeout=10)
    response.raise_for_status()
    data = response.json()
    return f"Account {data['account_id']} has balance {data['balance']} {data['currency']}"

@tool
def initiate_transfer(from_account: str, to_account: str, amount: float, currency: str) -> str:
    """Initiate a transfer through the fintech API."""
    payload = {
        "from_account": from_account,
        "to_account": to_account,
        "amount": amount,
        "currency": currency,
    }
    response = httpx.post(f"{BASE_URL}/transfers", json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return f"Transfer queued with transaction id {data['transaction_id']}"

Keep the tools narrow. One tool should do one thing well. That makes multi-agent routing predictable and easier to secure.

3) Create a LangChain multi-agent workflow

For multi-agent systems, split responsibilities. One agent can be a “banking assistant,” another can be a “risk reviewer,” and a coordinator can decide which tool to invoke.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import create_tool_calling_agent, AgentExecutor

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a fintech operations assistant. Use tools for any account or transfer action."),
    ("human", "{input}"),
])

tools = [fetch_account_balance, initiate_transfer]

agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

If you need true multi-agent coordination, you can split tool sets by role and route requests between executors. For example:

  • Agent A handles account lookup.
  • Agent B handles payment initiation.
  • Agent C reviews policy/risk before execution.
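The routing between those roles can be sketched deterministically before any LLM gets involved. The role names and intent keywords below are illustrative assumptions; in practice each returned role would map to its own AgentExecutor with a restricted tool set.

```python
def route_request(message: str) -> str:
    """Pick an agent role from coarse intent keywords.
    Each role maps to an executor with only the tools that role needs."""
    text = message.lower()
    if any(word in text for word in ("transfer", "send", "pay")):
        # Money movement always goes through the risk reviewer first.
        return "risk_reviewer"
    if any(word in text for word in ("balance", "account", "status")):
        return "banking_assistant"
    # Default to the read-only agent when intent is unclear.
    return "banking_assistant"
```

Deterministic routing at the edge keeps the high-risk executor unreachable unless the request explicitly involves money movement.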

4) Expose the agent through FastAPI

Put the agent behind an endpoint so your frontend or internal systems can call it like any other service.

from pydantic import BaseModel

# Reuse the FastAPI app from step 1 so the fintech routes and the agent
# endpoint are served from one process and one port. If you deploy the
# gateway as its own service, create a separate FastAPI instance and run
# it on a different port.

class QueryRequest(BaseModel):
    message: str

@app.post("/agent/query")
async def query_agent(req: QueryRequest):
    result = await executor.ainvoke({"input": req.message})
    return {"response": result["output"]}

This pattern keeps the LLM layer isolated from clients. Your application calls one stable endpoint; the agent decides whether to fetch balances or initiate transfers.

5) Add guardrails for fintech workflows

Do not let the model execute high-risk actions without checks. Put policy validation in your API layer before calling downstream services.

from fastapi import Depends, Header, HTTPException

def require_api_key(x_api_key: str = Header(...)):
    # In production, load the expected key from an environment variable or
    # secret manager and compare with a constant-time check.
    if x_api_key != "secure-key":
        raise HTTPException(status_code=401, detail="Unauthorized")

@app.post("/transfers/approved")
def approved_transfer(payload: TransferRequest, _: None = Depends(require_api_key)):
    if payload.amount > 5000:
        raise HTTPException(status_code=403, detail="Manual review required")
    return {"transaction_id": "tx_789", "status": "approved"}

Use this pattern for:

  • KYC/AML checks
  • transaction thresholds
  • role-based access control
  • audit trails for every agent action
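The last item on that list, audit trails, can start as an append-only record per tool call. This is a minimal sketch under assumed field names; a regulated deployment would write to durable, tamper-evident storage rather than a process-local list.

```python
import datetime
import json

# Append-only in-memory audit log; persist to durable storage in production.
AUDIT_LOG: list[dict] = []

def record_agent_action(agent: str, tool: str, arguments: dict, outcome: str) -> dict:
    """Append one audit record per tool invocation and return it."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        # Serialize arguments deterministically so records are comparable.
        "arguments": json.dumps(arguments, sort_keys=True),
        "outcome": outcome,
    }
    AUDIT_LOG.append(entry)
    return entry
```

Call this from inside each tool wrapper so every agent action, successful or not, leaves a record before the response goes back to the model.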

Testing the Integration

Run FastAPI first (if the agent gateway is a separate app, start it on its own port and point the test at that port):

uvicorn main:app --reload --port 8000

Then test the agent endpoint with a simple request:

import httpx

payload = {
    "message": "Check account ACC123 and transfer 250 USD from ACC123 to ACC999."
}

response = httpx.post(
    "http://localhost:8000/agent/query",
    json=payload,
)

print(response.status_code)
print(response.json())

Expected output (the exact wording varies by model and run, but the transfer tool's result should appear):

200
{'response': 'Transfer queued with transaction id tx_123456'}

If you want deeper verification, test each layer independently:

  • /accounts/{account_id} returns valid account data
  • /transfers returns a transaction ID
  • /agent/query correctly selects tools based on user intent

Real-World Use Cases

  • Customer support ops

    • Let an agent answer balance questions, explain failed payments, and open internal tickets through FastAPI-backed services.
  • Treasury automation

    • Use one agent to inspect cash positions and another to trigger approved transfers after policy checks pass.
  • Fraud and risk triage

    • Have one agent collect account context while another queries risk signals before escalating suspicious activity.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

