How to Integrate FastAPI with LangChain for Healthcare Startups
Combining FastAPI with LangChain gives you a clean way to expose clinical workflows as APIs while letting an LLM reason over patient context, summarize notes, triage requests, or draft follow-up actions. For a healthcare startup, that means you can ship an AI agent that sits between your app, internal services, and a language model without turning your backend into a mess.
Prerequisites
- Python 3.10+
- A FastAPI app already set up
- `uvicorn` for local serving
- `langchain` and a model provider package such as `langchain-openai`
- `pydantic` for request/response validation
- Access to a healthcare data source or mock patient records
- Environment variables configured:
  - `OPENAI_API_KEY`
  - Any healthcare API credentials if you are calling external systems
Install the core packages:
```shell
pip install fastapi uvicorn langchain langchain-openai pydantic
```
Integration Steps
1. Create the FastAPI healthcare endpoint
Start by exposing a health-related API endpoint with typed input. In real systems, this is where you validate patient data before sending it into an agent workflow.
```python
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(title="Healthcare AI API")

class PatientRequest(BaseModel):
    patient_id: str = Field(..., examples=["pt_10021"])
    symptoms: str = Field(..., examples=["fever, cough, fatigue"])
    age: int = Field(..., ge=0)

@app.post("/triage")
async def triage_patient(payload: PatientRequest):
    return {
        "patient_id": payload.patient_id,
        "status": "received",
        "symptoms": payload.symptoms,
        "age": payload.age,
    }
```
This gives you a stable contract for downstream LangChain calls.
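To make that contract concrete, here is a stdlib-only sketch of the rules the Pydantic model enforces. The `validate_patient_request` helper is hypothetical and only illustrative; in the real app Pydantic performs this validation and FastAPI returns a 422 response automatically.

```python
# Stdlib sketch of the validation Pydantic performs for PatientRequest.
# Hypothetical helper -- in the real app FastAPI rejects bad payloads with a 422.

def validate_patient_request(payload: dict) -> dict:
    if not isinstance(payload.get("patient_id"), str):
        raise ValueError("patient_id must be a string")
    if not isinstance(payload.get("symptoms"), str):
        raise ValueError("symptoms must be a string")
    age = payload.get("age")
    if not isinstance(age, int) or age < 0:  # mirrors Field(..., ge=0)
        raise ValueError("age must be a non-negative integer")
    return payload

valid = validate_patient_request(
    {"patient_id": "pt_10021", "symptoms": "fever, cough", "age": 34}
)
```

The point is that every rule lives at the API boundary, so anything that reaches the LangChain layer is already well-formed.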
2. Build a LangChain prompt for clinical summarization
Use LangChain’s prompt and model APIs to turn raw symptoms into structured triage guidance. Keep the prompt narrow so the output stays predictable.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a healthcare intake assistant. Do not diagnose. Provide triage-style guidance."),
    ("user", "Patient ID: {patient_id}\nAge: {age}\nSymptoms: {symptoms}\nReturn a short summary and next-step recommendation."),
])

chain = prompt | llm
```
chain = prompt | llm
This uses standard LangChain composition with the pipe operator, which is the simplest production-friendly pattern for chaining prompts and models.
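The pipe pattern is easy to picture: each stage is a callable that feeds its output to the next. Here is a toy stdlib sketch of the idea, not LangChain's actual Runnable implementation, with a stub standing in for the model:

```python
# Toy sketch of LangChain-style pipe composition -- NOT the real Runnable API.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (self | other) runs self first, then feeds the result into other.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# A "prompt" stage that formats the input dict, and a stub "model" stage.
prompt_step = Step(lambda d: f"Age: {d['age']}\nSymptoms: {d['symptoms']}")
model_step = Step(lambda text: f"SUMMARY OF: {text}")

chain = prompt_step | model_step
result = chain.invoke({"age": 34, "symptoms": "cough"})
# result == "SUMMARY OF: Age: 34\nSymptoms: cough"
```

LangChain's real runnables add batching, streaming, and async variants on top of this same composition shape.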
3. Call the LangChain chain from your FastAPI route
Now connect the API layer to the LLM workflow. The route receives validated input, invokes the chain, and returns structured output.
```python
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

app = FastAPI()

class PatientRequest(BaseModel):
    patient_id: str
    symptoms: str
    age: int

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a healthcare intake assistant. Do not diagnose."),
    ("user", "Patient ID: {patient_id}\nAge: {age}\nSymptoms: {symptoms}"),
])

chain = prompt | llm

@app.post("/triage")
async def triage_patient(payload: PatientRequest):
    result = await chain.ainvoke({
        "patient_id": payload.patient_id,
        "age": payload.age,
        "symptoms": payload.symptoms,
    })
    return {
        "patient_id": payload.patient_id,
        "triage_summary": result.content,
    }
```
For startup systems, this is usually enough to get an MVP working without introducing extra orchestration layers too early.
4. Add a tool for internal healthcare lookup
If your agent needs patient history from an internal service, wrap that service as a LangChain tool and call it from your app logic.
```python
import requests
from langchain_core.tools import tool

@tool
def fetch_patient_history(patient_id: str) -> str:
    """Fetch patient history from the internal healthcare service."""
    resp = requests.get(
        f"https://internal-api.example.com/patients/{patient_id}/history",
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return f"Last visit: {data['last_visit']}, Notes: {data['notes']}"
```
Then include it in the route before invoking the model:
```python
@app.post("/triage-with-history")
async def triage_with_history(payload: PatientRequest):
    # Note: this tool call is synchronous and will block the event loop;
    # for production, move it to a thread (e.g. run_in_executor) or use an async client.
    history = fetch_patient_history.invoke(payload.patient_id)
    result = await chain.ainvoke({
        "patient_id": payload.patient_id,
        "age": payload.age,
        "symptoms": f"{payload.symptoms}\nHistory: {history}",
    })
    return {
        "patient_id": payload.patient_id,
        "history": history,
        "triage_summary": result.content,
    }
```
That pattern keeps business data retrieval outside the model while still giving the model enough context to respond well.
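The separation is worth sketching on its own: the app fetches data deterministically, then packs it into the model's input as plain text. A minimal stdlib sketch with a stubbed history service (the function and store names here are illustrative, not part of the real integration):

```python
# Stdlib sketch of "retrieve outside the model, inject as context".
# fetch_history is a stub standing in for the internal healthcare service.

def fetch_history(patient_id: str) -> str:
    # In production this would be an authenticated HTTP call, not a dict lookup.
    fake_store = {"pt_10021": "Last visit: 2024-01-10, Notes: mild asthma"}
    return fake_store.get(patient_id, "No history on file")

def build_model_input(patient_id: str, symptoms: str) -> dict:
    # The model never calls the service itself; it only sees the retrieved text.
    return {
        "patient_id": patient_id,
        "symptoms": f"{symptoms}\nHistory: {fetch_history(patient_id)}",
    }

ctx = build_model_input("pt_10021", "persistent cough")
```

Because retrieval happens in ordinary application code, you can cache it, audit it, and strip PHI before anything reaches the prompt.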
5. Run the app and wire in production basics
Serve it with Uvicorn and keep secrets out of code. Add request logging, timeouts, and basic rate limits before exposing it publicly.
```shell
uvicorn main:app --reload --port 8000
```
For production deployments:
- Put `OPENAI_API_KEY` in your secret manager
- Add auth on every endpoint
- Log request IDs and model latency
- Set strict timeouts on external API calls
- Store only the minimum necessary PHI in prompts
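The timeout item is straightforward to enforce at the call site with `asyncio.wait_for`. A sketch wrapping any awaitable (here a stub standing in for a slow model or API call):

```python
import asyncio

# Sketch: wrap any awaitable (an LLM call, an internal API call) in a hard
# timeout so a slow upstream cannot hang the request handler.

async def call_with_timeout(coro, seconds: float):
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        # In a FastAPI route you would raise HTTPException(status_code=504) here.
        return {"error": "upstream timed out"}

async def slow_upstream():
    await asyncio.sleep(5)  # stand-in for a slow model or API call
    return {"ok": True}

result = asyncio.run(call_with_timeout(slow_upstream(), seconds=0.1))
# result == {"error": "upstream timed out"}
```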
Testing the Integration
Hit the endpoint with a sample request using curl or httpx. This verifies both FastAPI routing and LangChain invocation.
```python
import httpx

payload = {
    "patient_id": "pt_10021",
    "symptoms": "persistent cough and mild fever",
    "age": 34,
}

response = httpx.post("http://127.0.0.1:8000/triage", json=payload, timeout=30)
print(response.status_code)
print(response.json())
```
Expected output:
```json
{
  "patient_id": "pt_10021",
  "triage_summary": "..."
}
```
If you wired in history lookup too, you should see both the retrieved context and the generated summary in the response body.
Real-World Use Cases
- **Patient intake assistant**: collect symptoms through FastAPI, summarize them with LangChain, then route urgent cases to human staff.
- **Clinical note drafting**: take structured encounter data from your backend API and generate draft visit summaries for review.
- **Care coordination agent**: pull appointment history or medication data from internal services and have LangChain generate follow-up tasks or reminders.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.