How to Integrate FastAPI with LangChain for Fintech Startups
Combining FastAPI with LangChain gives you a clean path to shipping AI-powered financial workflows behind a proper API boundary. The pattern is simple: FastAPI handles auth, request validation, and service orchestration, while LangChain handles reasoning, tool use, and structured responses for agentic tasks like transaction review, KYC support, and customer-facing finance assistants.
Prerequisites
- Python 3.10+
- A FastAPI project set up with `uvicorn`
- LangChain installed
- An LLM provider configured, such as OpenAI or Anthropic
- Pydantic v2
- Basic familiarity with REST APIs and async Python
- A `.env` file with your model API key
Install the core packages:
```bash
pip install fastapi uvicorn langchain langchain-openai pydantic python-dotenv
```
Integration Steps
1. Create the FastAPI app and request schema
Start by defining a stable API contract. In fintech, this matters because downstream systems want typed inputs, not free-form prompts.
```python
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(title="Fintech AI Agent API")

class FinanceQuery(BaseModel):
    customer_id: str = Field(..., examples=["cust_123"])
    query: str = Field(..., examples=["Summarize the last 3 card transactions"])
```
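The same logic applies on the way out: if downstream services want typed outputs too, add a response model. A small sketch (the `FinanceAnswer` name is illustrative, not part of the original setup):

```python
class FinanceAnswer(BaseModel):
    customer_id: str
    answer: str
```

Attach it with `@app.post("/finance/assistant", response_model=FinanceAnswer)` so FastAPI validates and documents the response shape for you.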
2. Build a LangChain pipeline for the finance task
Use LangChain to turn user input into a structured response. For startup use cases, keep the chain focused on one job per endpoint.
```python
import os

from dotenv import load_dotenv
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Load OPENAI_API_KEY from the .env file mentioned in the prerequisites
load_dotenv()

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    temperature=0,
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a fintech assistant. Be concise, accurate, and avoid making up financial data."),
    ("user", "Customer ID: {customer_id}\nRequest: {query}"),
])

chain = prompt | llm
```
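Before wiring the chain into an endpoint, it's worth a quick local sanity check (the output text will vary by model):

```python
# Quick smoke test of the chain outside FastAPI
result = chain.invoke({
    "customer_id": "cust_123",
    "query": "Summarize the last 3 card transactions",
})
print(result.content)  # the model's text from the returned AIMessage
```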
3. Expose the chain through a FastAPI endpoint
This is where the integration becomes useful. FastAPI receives the request, LangChain generates the answer, and your API returns JSON that other services can consume.
```python
from fastapi import HTTPException

@app.post("/finance/assistant")
async def finance_assistant(payload: FinanceQuery):
    try:
        result = await chain.ainvoke({
            "customer_id": payload.customer_id,
            "query": payload.query,
        })
        return {
            "customer_id": payload.customer_id,
            "answer": result.content,
        }
    except Exception as e:
        # In production, log the details and return a generic message
        # rather than echoing internal errors to the client.
        raise HTTPException(status_code=500, detail=str(e))
```
4. Add tool access for fintech-specific actions
If your agent needs to fetch balances or transaction summaries, wrap those operations as tools. LangChain tools give you a controlled way to expose internal services without letting the model freestyle.
```python
from langchain_core.tools import tool

@tool
def get_recent_transactions(customer_id: str) -> str:
    """Return a short summary of the customer's recent transactions."""
    # Replace with a real DB/API call in production
    return (
        f"Recent transactions for {customer_id}: "
        "Card payment $42.18 at Grocery Mart; ACH deposit $2,500; "
        "Subscription $19.99 to SaaS Tool."
    )

tools = [get_recent_transactions]
```

Note the docstring: `@tool` uses it as the tool description and raises an error if it's missing.
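Balance lookups follow the same shape. A hypothetical second tool, where `get_account_balance` and its dummy return value are placeholders for your real service call:

```python
@tool
def get_account_balance(customer_id: str) -> str:
    """Return the customer's current available balance."""
    # Replace with a real ledger/core-banking call in production
    return f"Available balance for {customer_id}: $1,234.56"

tools = [get_recent_transactions, get_account_balance]
```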
If you want the model to decide when to call tools, wire it into an agent:
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent

agent_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a fintech support agent. Use tools when needed."),
    ("user", "{input}"),
    # Required: the agent writes intermediate tool calls/results here
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=agent_prompt)
executor = AgentExecutor(agent=agent, tools=tools)
```
```python
@app.post("/finance/agent")
async def finance_agent(payload: FinanceQuery):
    response = await executor.ainvoke({"input": f"{payload.customer_id}: {payload.query}"})
    return {"customer_id": payload.customer_id, "answer": response["output"]}
```
5. Run the service with production-friendly settings
Keep startup deployment simple at first: Uvicorn behind a reverse proxy or container platform. The `--reload` flag below is for local development only; drop it in production.

```bash
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```
For real fintech workloads, add:
- request authentication
- audit logging
- rate limits
- timeout handling around LLM calls
- redaction of sensitive fields before sending prompts
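Timeout handling is a good one to tackle first, since a slow LLM call can otherwise hold a request open indefinitely. A minimal sketch using `asyncio.wait_for` around the chain from step 2 (the `/finance/assistant/guarded` route and the 15-second budget are illustrative, not part of the original setup):

```python
import asyncio

from fastapi import HTTPException

LLM_TIMEOUT_SECONDS = 15  # illustrative budget; tune per provider

@app.post("/finance/assistant/guarded")
async def finance_assistant_guarded(payload: FinanceQuery):
    try:
        result = await asyncio.wait_for(
            chain.ainvoke({
                "customer_id": payload.customer_id,
                "query": payload.query,
            }),
            timeout=LLM_TIMEOUT_SECONDS,
        )
    except asyncio.TimeoutError:
        # Fail fast with a gateway timeout instead of hanging the client
        raise HTTPException(status_code=504, detail="LLM request timed out")
    return {"customer_id": payload.customer_id, "answer": result.content}
```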
Testing the Integration
Use curl or any API client to verify the endpoint works end-to-end.
```bash
curl -X POST "http://localhost:8000/finance/assistant" \
  -H "Content-Type: application/json" \
  -d '{
    "customer_id": "cust_123",
    "query": "Summarize my recent transactions"
  }'
```
Expected output (exact wording will vary by model):
```json
{
  "customer_id": "cust_123",
  "answer": "Here is a concise summary of the customer’s recent activity..."
}
```
If you test the tool-enabled endpoint:
```bash
curl -X POST "http://localhost:8000/finance/agent" \
  -H "Content-Type: application/json" \
  -d '{
    "customer_id": "cust_123",
    "query": "What were my recent card transactions?"
  }'
```
Expected output (the agent's phrasing may vary, but it should reflect the tool data):
```json
{
  "customer_id": "cust_123",
  "answer": "Recent transactions for cust_123: Card payment $42.18 at Grocery Mart; ACH deposit $2,500; Subscription $19.99 to SaaS Tool."
}
```
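For CI rather than manual curl, FastAPI's TestClient covers the same ground. A minimal sketch, assuming the app lives in `main.py` (note it still calls the real model unless you stub the chain):

```python
from fastapi.testclient import TestClient

from main import app  # assumes your app module is named main.py

client = TestClient(app)

def test_finance_assistant_returns_answer():
    response = client.post(
        "/finance/assistant",
        json={"customer_id": "cust_123", "query": "Summarize my recent transactions"},
    )
    assert response.status_code == 200
    body = response.json()
    assert body["customer_id"] == "cust_123"
    assert body["answer"]  # non-empty model answer
```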
Real-World Use Cases
- Customer support copilot: answer balance questions, transaction summaries, fee explanations, and account-status queries through an authenticated API.
- Internal operations assistant: help ops teams triage disputes, summarize case notes, and generate next-step recommendations from structured backend data.
- Compliance review workflows: build endpoints that summarize suspicious activity patterns or prepare analyst-ready narratives from transaction logs.
The main pattern here is boring in the best way: FastAPI gives you a reliable service layer, and LangChain gives you controlled agent behavior on top of it. That combination is what startups need when they want AI features without turning their backend into an untestable prompt pile.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit