LangChain Tutorial (Python): deploying with Docker for intermediate developers
This tutorial shows you how to package a LangChain Python app into a Docker image, run it locally, and ship it in a way that behaves the same on your laptop and in production. You need this when your agent works locally but breaks in CI, on another machine, or inside a container because of missing dependencies, environment variables, or runtime assumptions.
What You'll Need
- Python 3.11+
- Docker Desktop or Docker Engine installed
- An OpenAI API key exported as OPENAI_API_KEY
- pip and venv
- Basic familiarity with LangChain ChatOpenAI, prompts, and LCEL
- A project folder with write access
Step-by-Step
- Create a minimal LangChain app first. Keep it small and deterministic so you can validate the container later without debugging application logic at the same time.
```python
# app.py
import os

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate


def main() -> None:
    api_key = os.environ["OPENAI_API_KEY"]
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a concise assistant."),
        ("user", "Write one sentence explaining Docker for Python developers."),
    ])
    llm = ChatOpenAI(model="gpt-4o-mini", api_key=api_key)
    chain = prompt | llm
    result = chain.invoke({})
    print(result.content)


if __name__ == "__main__":
    main()
```
- Add the dependencies you actually need. For Docker deployments, pinning packages matters because you want repeatable builds, not whatever happens to be latest when the image is built.
```
# requirements.txt
langchain-core==0.3.34
langchain-openai==0.3.6
openai==1.61.1
python-dotenv==1.0.1
```
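Pins only help if the running image actually matches them. As an optional startup sanity check, you can compare installed versions against the pinned ones. This is a sketch: parse_pins and check_pins are my own helper names, not part of LangChain or pip.

```python
# check_pins.py — fail fast if installed packages drift from requirements.txt.
from importlib import metadata


def parse_pins(requirements_text: str) -> dict[str, str]:
    """Extract 'name==version' pins from requirements-file text."""
    pins = {}
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if "==" in line:
            name, _, version = line.partition("==")
            pins[name.strip()] = version.strip()
    return pins


def check_pins(path: str = "requirements.txt") -> None:
    """Raise if any installed distribution differs from its pinned version."""
    with open(path) as f:
        pins = parse_pins(f.read())
    for name, expected in pins.items():
        installed = metadata.version(name)
        if installed != expected:
            raise RuntimeError(f"{name}: pinned {expected}, installed {installed}")
```

Calling check_pins() at the top of main() turns "silently different environment" into an immediate, readable error.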
- Build a container image with a slim Python base image. Copy only what you need, install dependencies first for better layer caching, then copy the application code.
```dockerfile
# Dockerfile
FROM python:3.11-slim

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]
```
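The COPY instructions above only pull in requirements.txt and app.py, so the build context stays clean. If you later switch to a broader COPY . ., a .dockerignore keeps secrets and clutter out of the image. A minimal example might look like:

```
# .dockerignore
.env
.venv/
__pycache__/
*.pyc
.git/
```

Excluding .env here is the important line: it prevents your local API key from ever being baked into an image layer.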
- Use environment variables instead of hardcoding secrets into the image. That keeps credentials out of your Docker layers and makes the same image work across environments.
```bash
export OPENAI_API_KEY="your-key-here"

docker build -t langchain-docker-demo .

docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  langchain-docker-demo
```
- If you want local development parity, add a .env file and load it outside the container during development only. In production, inject the variable from your runtime platform or secret manager.
```
# .env
OPENAI_API_KEY=your-key-here
```
```python
# app.py
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Load variables from .env in local development; inside the container the
# key arrives via -e / --env-file and load_dotenv() is a harmless no-op.
load_dotenv()


def main() -> None:
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a concise assistant."),
        ("user", "Write one sentence explaining Docker for Python developers."),
    ])
    llm = ChatOpenAI(model="gpt-4o-mini")  # reads OPENAI_API_KEY from the environment
    chain = prompt | llm
    result = chain.invoke({})
    print(result.content)


if __name__ == "__main__":
    main()
```
- Make the image easier to debug by passing through logs and keeping startup simple. Once this works, you can add an API server like FastAPI, but don’t start there if your goal is deployment confidence.
```bash
docker run --rm -it \
  --env-file .env \
  langchain-docker-demo
```
Testing It
Run the container once with docker run and confirm it prints a valid model response instead of stack traces or empty output. If it fails, check whether OPENAI_API_KEY is present inside the container and whether your dependency versions match what you tested locally.
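A fail-fast check makes the missing-key case obvious instead of surfacing as a KeyError deep inside LangChain. One way to do this (a sketch; require_env is my own helper, not a LangChain API) is to validate required variables before building the chain:

```python
# A small fail-fast helper: read a required environment variable or raise
# with an actionable message instead of an opaque traceback at call time.
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise with a hint."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(
            f"{name} is not set. Pass it with `docker run -e {name}=...` "
            f"or via --env-file."
        )
    return value
```

Calling require_env("OPENAI_API_KEY") at the top of main() turns a confusing container failure into a one-line diagnosis.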
Next, rebuild after changing app.py to make sure Docker layer caching is working as expected and your code changes are actually being copied into the image. If you see stale behavior, verify that you are not mounting an old volume over /app.
For a more realistic test, run the same image on another machine or in CI using only environment injection, not local files outside the build context. That tells you whether your deployment artifact is truly portable.
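One possible shape for that CI smoke test, assuming GitHub Actions and a repository secret named OPENAI_API_KEY (the workflow name and file path here are illustrative):

```yaml
# .github/workflows/docker-smoke.yml
name: docker-smoke
on: [push]
jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t langchain-docker-demo .
      - run: docker run --rm -e OPENAI_API_KEY="$OPENAI_API_KEY" langchain-docker-demo
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Because the runner starts from a clean machine with only the repository and the injected secret, a green run here is strong evidence the image is portable.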
Next Steps
- Add a FastAPI wrapper so your LangChain chain is exposed as an HTTP service.
- Move secrets to AWS Secrets Manager, Azure Key Vault, or Kubernetes secrets.
- Add health checks and structured logging before deploying to ECS, Cloud Run, or Kubernetes.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.