LangChain Tutorial (Python): deploying with Docker for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to package a small LangChain Python app into a Docker image and run it locally with one command. You need this when you want a repeatable deployment target for development, demos, or moving the same app into a container platform later.

What You'll Need

  • Python 3.11+
  • Docker Desktop or Docker Engine installed
  • An OpenAI API key
  • A working pip environment
  • Basic familiarity with LangChain chains and prompts
  • These Python packages:
    • langchain
    • langchain-openai
    • python-dotenv

Step-by-Step

  1. Create a minimal project structure. Keep the app small so you can verify Docker first, then expand it later.
mkdir langchain-docker-demo
cd langchain-docker-demo
touch app.py requirements.txt .env .dockerignore Dockerfile
  2. Add your dependencies and environment variables. The app will read the API key from .env, which keeps secrets out of your code.
# requirements.txt
langchain>=0.2.0
langchain-openai>=0.1.0
python-dotenv>=1.0.0
# .env
OPENAI_API_KEY=your_openai_api_key_here
  3. Write the LangChain app. This example uses a prompt template, an OpenAI chat model, and a simple string output parser.
# app.py
from dotenv import load_dotenv
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

load_dotenv()

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("user", "Explain {topic} in one paragraph.")
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm | StrOutputParser()

if __name__ == "__main__":
    result = chain.invoke({"topic": "Dockerizing a Python LangChain app"})
    print(result)
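If OPENAI_API_KEY is missing, the failure surfaces deep inside the OpenAI client with a confusing stack trace. One optional addition to app.py is a fail-fast check right after load_dotenv(). A minimal sketch; the helper name require_env is mine, not part of LangChain:

```python
import os
import sys

def require_env(name: str) -> str:
    """Return an environment variable's value, or exit with a clear error."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"Missing required environment variable: {name}")
    return value
```

Calling require_env("OPENAI_API_KEY") after load_dotenv() turns a silent misconfiguration into a one-line message at startup.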
  4. Add Docker support. This image installs dependencies, copies the app, and runs it with Python inside the container.
FROM python:3.11-slim

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]
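If you plan to reuse this image beyond a quick demo, a common hardening step is running the app as a non-root user. A sketch of that variant, assuming the same base image and layout as the Dockerfile above:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# Create an unprivileged user for running the app.
RUN useradd --create-home appuser

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Drop root privileges before the app starts.
USER appuser

CMD ["python", "app.py"]
```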
  5. Ignore local files you do not want in the image build context. This keeps your image smaller and avoids leaking secrets.
# .dockerignore
__pycache__/
*.pyc
.env
.git/
venv/
.envrc
.DS_Store
  6. Build and run the container. Docker will create an isolated runtime that behaves the same on your laptop and on another machine with Docker installed.
docker build -t langchain-docker-demo .
docker run --rm --env-file .env langchain-docker-demo

If you want to pass the key directly instead of using .env, use -e:

docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  langchain-docker-demo

Testing It

Run the container and confirm it prints a response from the model instead of raising an authentication or import error. If you see ModuleNotFoundError, a dependency is missing from requirements.txt or the Dockerfile's pip install step did not run against the right file.

If you get an OpenAI auth error, check that .env contains a valid key and that you passed it into docker run. If the container exits immediately after printing output, that is expected for this demo because it runs once and finishes.

For a quick sanity check, change the topic in app.py and rebuild the image:

docker build -t langchain-docker-demo .
docker run --rm --env-file .env langchain-docker-demo

You should see different text based on the new prompt input.
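Rebuilding for every topic change is fine for a demo, but you can also make the topic configurable at run time. A sketch, reading a TOPIC environment variable (a name I'm introducing, not part of the original app.py):

```python
import os

# Read the topic from the environment, falling back to the tutorial's default.
# With this in app.py, `docker run -e TOPIC="..."` changes the prompt input
# without rebuilding the image.
topic = os.environ.get("TOPIC", "Dockerizing a Python LangChain app")
print(f"Using topic: {topic}")
```

For example: docker run --rm --env-file .env -e TOPIC="multi-stage Docker builds" langchain-docker-demo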

Next Steps

  • Add an HTTP API with FastAPI so your LangChain app can serve requests instead of running once at startup.
  • Split configuration into typed settings with Pydantic so Docker deployments stay predictable across environments.
  • Move from docker run to docker compose when you add Redis, Postgres, or other services around the agent.
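As a preview of the compose direction in the last bullet, a minimal docker-compose.yml might look like this (a sketch; the redis service and names are illustrative):

```yaml
services:
  app:
    build: .
    env_file:
      - .env
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```

With this file in place, `docker compose up --build` replaces the separate build and run commands.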


By Cyprian Aarons, AI Consultant at Topiax.
