LangChain Tutorial (Python): Adding Authentication for Beginners
This tutorial shows you how to add authentication to a LangChain-based Python app so only valid users can reach your chain or agent. You need this when your LangChain app is exposed through an API, internal tool, or chat UI and you want to block anonymous access before any model call happens.
What You'll Need

- Python 3.10+
- langchain
- langchain-openai
- fastapi
- uvicorn
- python-dotenv
- An OpenAI API key
- A simple bearer token for local auth testing

Install the packages:

```shell
pip install langchain langchain-openai fastapi uvicorn python-dotenv
```
Step-by-Step
1. Start with a minimal LangChain chain that answers questions. Keep the chain separate from your web layer so authentication stays outside the LLM logic.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "What is authentication?"}))
```
2. Add a FastAPI app with a bearer-token dependency. This is the actual auth gate: requests without the right token never reach your LangChain chain.
```python
import os
import secrets

from fastapi import FastAPI, Depends, HTTPException, status
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from pydantic import BaseModel

app = FastAPI()
security = HTTPBearer()
API_TOKEN = os.getenv("API_TOKEN", "dev-secret-token")

class QueryRequest(BaseModel):
    question: str

def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)):
    # compare_digest is a constant-time comparison, so the check
    # doesn't leak information about the token via response timing.
    if not secrets.compare_digest(credentials.credentials, API_TOKEN):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid or missing token",
        )
    return True
```
3. Wire the authenticated endpoint to your LangChain chain. The endpoint accepts a question, checks auth first, then sends the request into the chain only if the token is valid.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Same chain as in step 1, now living in app.py next to the FastAPI code.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm | StrOutputParser()

@app.post("/ask")
def ask(req: QueryRequest, _: bool = Depends(verify_token)):
    answer = chain.invoke({"question": req.question})
    return {"answer": answer}
```
4. Put your secrets in environment variables and run the server. This keeps tokens out of source control and makes local testing predictable.
```shell
export OPENAI_API_KEY="your-openai-api-key"
export API_TOKEN="dev-secret-token"
uvicorn app:app --reload --port 8000
```
5. Call the endpoint with and without the token to confirm auth is working. Use `curl` first so you can see exactly what happens at the HTTP layer before adding any frontend.
```shell
# With the token: should return a model answer
curl -X POST "http://127.0.0.1:8000/ask" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer dev-secret-token" \
  -d '{"question":"Explain bearer auth in one sentence."}'

# Without the token: should be rejected before the chain runs
curl -X POST "http://127.0.0.1:8000/ask" \
  -H "Content-Type: application/json" \
  -d '{"question":"This should fail."}'
```
Testing It
First, send a request with the correct Authorization: Bearer ... header and confirm you get a model response back. Then send the same request with a wrong token and confirm FastAPI returns 401 Unauthorized. If you omit the Authorization header entirely, HTTPBearer rejects the request itself, by default with a 403 rather than a 401.
If you want stronger proof, add a print statement inside verify_token and verify it runs before chain.invoke(). That tells you the auth check happens before any OpenAI call is made.
For production-style testing, rotate the token through environment variables and make sure your deployment platform injects it correctly at runtime.
Next Steps
- Replace the static bearer token with JWT validation using `python-jose` or `PyJWT`
- Move auth into middleware if you need to protect multiple endpoints consistently
- Add per-user rate limiting and audit logging before each LangChain invocation
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.