LangGraph Tutorial (TypeScript): deploying with Docker for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to take a basic LangGraph TypeScript app and package it into a Docker image you can run locally or deploy anywhere that supports containers. You need this when your graph works on your machine but you want a repeatable runtime, predictable dependencies, and an easy path to staging or production.

What You'll Need

  • Node.js 20+
  • Docker Desktop or Docker Engine
  • A TypeScript LangGraph project
  • An OpenAI API key
  • npm or pnpm
  • These packages installed in your project:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • typescript
    • tsx

Step-by-Step

  1. Create a minimal LangGraph app in TypeScript. This example uses a single node that calls an LLM and returns the result as state, which is enough to prove your container works end to end.
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const State = Annotation.Root({
  question: Annotation<string>(),
  answer: Annotation<string>(),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

async function answerNode(state: typeof State.State) {
  const response = await llm.invoke(`Answer briefly: ${state.question}`);
  return { answer: response.content.toString() };
}

const graph = new StateGraph(State)
  .addNode("answer", answerNode)
  .addEdge(START, "answer")
  .addEdge("answer", END)
  .compile();

const result = await graph.invoke({ question: "What is LangGraph?" });
console.log(result);
  2. Add a package.json and TypeScript config so Docker can build and run the app with predictable output. Keep the build simple; beginners should not need a bundler for this. First, package.json:
{
  "name": "langgraph-docker-demo",
  "type": "module",
  "scripts": {
    "dev": "tsx src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/langgraph": "^0.2.0",
    "@langchain/openai": "^0.5.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.0.0",
    "typescript": "^5.0.0"
  }
}
Then tsconfig.json:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "skipLibCheck": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*.ts"]
}
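Before containerizing anything, it is worth confirming the build works on your machine; debugging a TypeScript misconfiguration through Docker logs is much slower. A quick local sanity check, assuming your key is exported in the shell (the sk-... value is a placeholder for your real key):

```shell
# Install dependencies and compile TypeScript to dist/
npm install
npm run build

# Run the compiled output once with the key from your shell environment
export OPENAI_API_KEY="sk-..."
npm start
```

If npm start prints a result here, any failure inside the container is a Docker problem, not an app problem.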
  3. Put the code in src/index.ts and make sure it reads the API key from the environment rather than hardcoding it in source control. That is the part people usually get wrong before moving to Docker.
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const State = Annotation.Root({
  question: Annotation<string>(),
  answer: Annotation<string>(),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

async function answerNode(state: typeof State.State) {
  const response = await llm.invoke(`Answer briefly: ${state.question}`);
  return { answer: response.content.toString() };
}

const graph = new StateGraph(State)
  .addNode("answer", answerNode)
  .addEdge(START, "answer")
  .addEdge("answer", END)
  .compile();

const result = await graph.invoke({ question: "What is LangGraph?" });
console.log(JSON.stringify(result, null, 2));
  4. Add a Dockerfile that installs dependencies, builds TypeScript, and runs the compiled JavaScript. This multi-stage pattern is standard for small Node services because it keeps runtime images cleaner than shipping your entire source tree.
FROM node:20-slim AS builder

WORKDIR /app

COPY package*.json tsconfig.json ./
RUN npm install

COPY src ./src
RUN npm run build

FROM node:20-slim

WORKDIR /app

COPY package*.json ./
RUN npm install --omit=dev

COPY --from=builder /app/dist ./dist

CMD ["node", "dist/index.js"]
  5. Build and run the container with your API key injected at runtime. Keep secrets out of the image; pass them in as environment variables when you start the container.
docker build -t langgraph-docker-demo .

docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  langgraph-docker-demo
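If you prefer to keep the key in a local .env file (which the .dockerignore in the next step excludes from the build context), docker run can read it at start time with the --env-file flag instead of -e. A sketch, assuming the file sits next to your Dockerfile:

```shell
# .env contains a line like OPENAI_API_KEY=sk-...
# The file is read by the Docker CLI at run time; it is never baked into the image.
docker run --rm --env-file .env langgraph-docker-demo
```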
  6. If you want a cleaner local workflow, add a .dockerignore so Docker does not copy unnecessary files into the build context. This reduces build time and avoids accidentally baking local artifacts into images.
node_modules
dist
.git
.env
Dockerfile
npm-debug.log

Testing It

Run the container and confirm it prints a JSON object containing both question and answer. If you see an authentication error, check that OPENAI_API_KEY is available in your shell before running docker run.
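The exact answer text will vary between models and runs, but the printed state should have roughly this shape (the answer string below is illustrative, not a guaranteed output):

```json
{
  "question": "What is LangGraph?",
  "answer": "LangGraph is a library for building stateful, graph-based LLM applications."
}
```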

If the container builds but exits immediately with a module error, your package.json "type" field or TypeScript module settings are probably mismatched. For this setup, keep both NodeNext and ESM aligned exactly as shown.

For deployment, test the same image on another machine or in CI so you know it does not depend on local files outside the image. That is the real value of Docker here: same artifact, same behavior.

Next Steps

  • Add more nodes and conditional edges so your graph handles routing instead of one-shot prompts.
  • Mount persistent storage or connect LangGraph to external state if you need resumable workflows.
  • Replace the direct OpenAI call with structured tool execution once you are ready for agent behavior beyond simple generation.
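As a taste of the first item: conditional edges route on a plain function that inspects state and returns the name of the next node. A minimal sketch of such a router; the routeQuestion helper, the "clarify" node, and the length threshold are illustrative additions, not part of the tutorial app:

```typescript
// State shape used by the router (mirrors the Annotation.Root fields above).
type QAState = { question: string; answer: string };

// Illustrative router: send short questions straight to the "answer" node,
// anything longer to a hypothetical "clarify" node.
export function routeQuestion(
  state: Pick<QAState, "question">
): "answer" | "clarify" {
  return state.question.trim().length <= 120 ? "answer" : "clarify";
}

console.log(routeQuestion({ question: "What is LangGraph?" })); // prints: answer

// Wiring sketch (assumes an answerNode and clarifyNode exist):
// new StateGraph(State)
//   .addNode("answer", answerNode)
//   .addNode("clarify", clarifyNode)
//   .addConditionalEdges(START, routeQuestion)
//   .addEdge("answer", END)
//   .addEdge("clarify", END)
//   .compile();
```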

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
