How to Integrate FastAPI for healthcare with LangChain for AI agents
FastAPI gives you a clean, typed API layer for healthcare workflows. LangChain gives you the orchestration layer for AI agents that can reason over clinical data, call tools, and return structured outputs.
Put them together and you get a backend that can expose patient-safe endpoints while an agent handles triage, summarization, prior-auth support, or clinical note routing.
Prerequisites
- Python 3.10+
- `fastapi`
- `uvicorn`
- `langchain`
- `langchain-openai` (or another LangChain model provider)
- `pydantic`
- A valid LLM API key in your environment
- A basic FastAPI app already running or ready to run
- If you are handling healthcare data:
  - HIPAA-compliant infrastructure
  - A PHI redaction strategy
  - Audit logging enabled
Install the core packages:
```bash
pip install fastapi uvicorn langchain langchain-openai pydantic
```
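The PHI redaction item above deserves a concrete starting point. A minimal sketch of a regex-based redaction pass, using only the standard library; the patterns here are illustrative only, and a production system should use a vetted de-identification library with coverage review rather than a handful of regexes:

```python
import re

# Illustrative patterns only -- real PHI redaction needs a vetted
# de-identification library, not three regexes.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_phi(text: str) -> str:
    """Replace each matched span with a [LABEL] placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_phi("Call 555-867-5309 about SSN 123-45-6789."))
# -> Call [PHONE] about SSN [SSN].
```

Running free text through a pass like this before it reaches the model keeps raw identifiers out of prompts and provider logs.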
Integration Steps
1) Define your FastAPI healthcare endpoint
Start with a typed request/response model. In healthcare systems, this matters because your agent should not be passing around loose JSON blobs.
```python
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(title="Healthcare AI Gateway")

class PatientQuery(BaseModel):
    patient_id: str = Field(..., description="Internal patient identifier")
    question: str = Field(..., description="Clinical or administrative question")

class AgentResponse(BaseModel):
    answer: str
    source: str

@app.post("/healthcare/query", response_model=AgentResponse)
async def healthcare_query(payload: PatientQuery):
    return AgentResponse(
        answer=f"Received query for patient {payload.patient_id}",
        source="fastapi",
    )
```
This endpoint is the contract. Your LangChain agent will call it as a tool instead of reaching into your database directly.
2) Wrap the FastAPI endpoint as a LangChain tool
Use `StructuredTool` so the agent gets typed input and predictable output. This is better than free-form string tools for healthcare workflows.
```python
import httpx
from pydantic import BaseModel, Field
from langchain_core.tools import StructuredTool

class HealthcareToolInput(BaseModel):
    patient_id: str = Field(..., description="Internal patient identifier")
    question: str = Field(..., description="Question to send to the healthcare API")

async def call_healthcare_api(patient_id: str, question: str) -> str:
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        response = await client.post(
            "/healthcare/query",
            json={"patient_id": patient_id, "question": question},
            timeout=15.0,
        )
        response.raise_for_status()
        data = response.json()
        return data["answer"]

healthcare_tool = StructuredTool.from_function(
    coroutine=call_healthcare_api,
    name="healthcare_query",
    description="Queries the healthcare FastAPI service for patient-specific answers",
    # Bind the typed schema so the agent sees the field descriptions.
    args_schema=HealthcareToolInput,
)
```
The key pattern here is simple:
- FastAPI owns the system boundary
- LangChain owns orchestration
- The tool is just an HTTP client wrapper
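One practical consequence of "the tool is just an HTTP client wrapper": you can unit-test the tool logic without the agent or a live server by injecting a fake transport. A stdlib-only sketch of the pattern; the `make_healthcare_tool` and `fake_transport` names are illustrative, not part of LangChain:

```python
from typing import Callable

# The "transport" is any callable taking (path, payload) -> response dict.
Transport = Callable[[str, dict], dict]

def make_healthcare_tool(transport: Transport) -> Callable[[str, str], str]:
    """Build the tool function around an injected transport."""
    def call_healthcare_api(patient_id: str, question: str) -> str:
        data = transport("/healthcare/query",
                         {"patient_id": patient_id, "question": question})
        return data["answer"]
    return call_healthcare_api

# In tests, swap the real httpx call for a canned response.
def fake_transport(path: str, payload: dict) -> dict:
    assert path == "/healthcare/query"
    return {"answer": f"stub answer for {payload['patient_id']}"}

tool_fn = make_healthcare_tool(fake_transport)
print(tool_fn("12345", "Current status?"))
# -> stub answer for 12345
```

In production you pass a transport that wraps `httpx`; in tests you pass the stub. The tool's contract never changes.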
3) Build the LangChain agent around that tool
For production use, keep the prompt narrow. Tell the agent what it can and cannot do.
```python
import os

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a healthcare operations assistant. Use tools when patient-specific data is needed. Do not invent medical facts."),
    ("human", "{input}"),
    # create_tool_calling_agent requires a scratchpad slot for tool-call messages.
    ("placeholder", "{agent_scratchpad}"),
])

tools = [healthcare_tool]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
This gives you a tool-calling agent that can decide when to hit your FastAPI service.
4) Expose an agent endpoint in FastAPI
Now add a second endpoint that receives user input and runs the LangChain agent. This is usually where your chat UI or internal portal connects.
```python
class ChatRequest(BaseModel):
    input: str

class ChatResponse(BaseModel):
    output: str

@app.post("/agent/chat", response_model=ChatResponse)
async def chat(req: ChatRequest):
    result = await executor.ainvoke({"input": req.input})
    return ChatResponse(output=result["output"])
```
At this point:
- `/healthcare/query` is your domain service
- `/agent/chat` is your orchestration endpoint
That separation keeps your system maintainable when compliance rules change.
5) Run both services and keep boundaries explicit
If you are deploying this in one app, run Uvicorn normally:
```bash
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
If you split them into separate services later, keep these rules:
- The FastAPI service handles auth, validation, audit logs, and PHI access control
- The LangChain service handles prompt logic and tool routing only
- Never let the agent bypass your API layer to hit storage directly
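The audit-log rule can start as a thin decorator at the API boundary. A minimal stdlib sketch, assuming you record request metadata but never response bodies (which may contain PHI); the `audited` helper and the logged fields are illustrative:

```python
import functools
import json
import logging
import time

audit_log = logging.getLogger("audit")

def audited(endpoint: str):
    """Record every call at the boundary: endpoint, caller-supplied
    identifier, and latency -- but never the response body."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                audit_log.info(json.dumps({
                    "endpoint": endpoint,
                    "patient_id": kwargs.get("patient_id"),
                    "elapsed_ms": round((time.monotonic() - start) * 1000, 1),
                }))
        return wrapper
    return decorator

@audited("/healthcare/query")
def healthcare_query(patient_id: str, question: str) -> str:
    return f"Received query for patient {patient_id}"

print(healthcare_query(patient_id="12345", question="status?"))
# -> Received query for patient 12345
```

In a real deployment the same idea usually lives in FastAPI middleware or a dependency so every route is covered by default rather than opt-in.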
Testing the Integration
Use a simple request against the agent endpoint and confirm it routes through the tool.
```python
import requests

response = requests.post(
    "http://localhost:8000/agent/chat",
    json={"input": "Look up patient 12345 and summarize the current status."},
)
print(response.status_code)
print(response.json())
```
Expected output (status 200; the exact wording of `output` is generated by the model from the stub tool response, so it will vary):

```
200
{
  "output": "Received query for patient 12345"
}
```
If you want stronger verification:
- log every incoming request in FastAPI
- log every tool invocation in LangChain with `verbose=True`
- assert that `/healthcare/query` was called before returning the final answer
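The tool-was-called assertion can be implemented with a thin recording wrapper around the tool function, no mocking library required. A stdlib-only sketch; the `recording` decorator and `calls` list are illustrative names:

```python
calls: list[dict] = []

def recording(fn):
    """Wrap a tool function and append each invocation to `calls`."""
    def wrapper(**kwargs):
        calls.append({"tool": fn.__name__, "args": kwargs})
        return fn(**kwargs)
    return wrapper

@recording
def healthcare_query(patient_id: str, question: str) -> str:
    return f"Received query for patient {patient_id}"

# Simulate one agent turn, then assert the tool actually ran.
answer = healthcare_query(patient_id="12345", question="summarize status")
assert calls and calls[0]["tool"] == "healthcare_query"
print(answer, "| tool calls:", len(calls))
```

The same wrapper can be applied to the coroutine you hand to `StructuredTool.from_function`, giving you a call log to assert against in integration tests.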
Real-World Use Cases
- Clinical intake assistant
  - Collects symptoms from staff or patients
  - Calls FastAPI endpoints for demographics or encounter history
  - Returns a structured summary for triage teams
- Prior authorization helper
  - Reads request context from your API layer
  - Uses LangChain to draft payer-facing responses
  - Pulls supporting clinical facts through controlled endpoints
- Patient support workflow automation
  - Answers scheduling or medication refill questions from approved systems
  - Escalates anything ambiguous to human review
  - Keeps all PHI access behind audited API calls
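The escalation rule in the last workflow can start as a simple gate in front of the agent: anything matching a deny-list or below a confidence threshold goes to a human queue. A deliberately naive stdlib sketch; the keyword list and threshold are placeholders you would tune with clinical stakeholders:

```python
# Placeholder deny-list -- real triage rules come from clinical review.
ESCALATE_KEYWORDS = {"chest pain", "overdose", "suicidal", "allergic reaction"}

def route(question: str, confidence: float) -> str:
    """Return 'agent' for routine questions, 'human_review' otherwise."""
    text = question.lower()
    if any(kw in text for kw in ESCALATE_KEYWORDS):
        return "human_review"
    if confidence < 0.8:
        return "human_review"
    return "agent"

print(route("When is my refill ready?", confidence=0.95))    # -> agent
print(route("Patient reports chest pain", confidence=0.95))  # -> human_review
```

Running this gate before `executor.ainvoke` keeps high-risk inputs away from the model entirely instead of relying on the prompt to refuse them.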
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit