LangGraph Tutorial (TypeScript): deploying with Docker for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to package a LangGraph TypeScript app into a Docker image, run it locally, and keep it deployable for production. You need this when your agent is more than a notebook experiment: once you want repeatable builds, environment isolation, and a container you can ship to Kubernetes, ECS, or any container platform.

What You'll Need

  • Node.js 20+
  • Docker Engine 24+
  • A TypeScript project with:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • typescript
    • tsx
  • An OpenAI API key
  • Basic familiarity with:
    • LangGraph state graphs
    • async/await
    • Dockerfile syntax
  • A .env file for local development

Step-by-Step

  1. Create the project and install dependencies.

    Use a minimal setup so the container mirrors local execution. The important part is that your runtime dependencies are explicit and your build output goes to dist/.

    mkdir langgraph-docker-demo
    cd langgraph-docker-demo
    npm init -y
    npm install @langchain/langgraph @langchain/openai @langchain/core dotenv
    npm install -D typescript tsx @types/node
    
  2. Add TypeScript config and an environment file.

    Keep the compiler strict enough to catch graph-state mistakes before they hit production. The .env file stays outside the image so secrets are injected at runtime.

    {
      "compilerOptions": {
        "target": "ES2022",
        "module": "NodeNext",
        "moduleResolution": "NodeNext",
        "outDir": "dist",
        "rootDir": "src",
        "strict": true,
        "esModuleInterop": true,
        "skipLibCheck": true,
        "resolveJsonModule": true
      },
      "include": ["src/**/*.ts"]
    }
    
    OPENAI_API_KEY=your_openai_key_here
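    Since the key is only injected at runtime, it is worth failing fast when it is missing. A minimal sketch of such a check (the `requireEnv` helper is illustrative, not part of the project files above):

```typescript
// Illustrative fail-fast check for runtime-injected secrets.
// Call this before constructing the model so a missing key fails at startup,
// not on the first model call deep inside the graph.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv("OPENAI_API_KEY");
```

    A startup crash with a clear message is much easier to debug in a container than an authentication error surfacing mid-request.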
    
  3. Build a real LangGraph app in TypeScript.

    This graph keeps state as messages, calls GPT-4o-mini, and returns the final assistant response. Notice the imports are from the actual LangGraph and LangChain packages used in TypeScript projects.

    // src/index.ts
    import "dotenv/config";
    import { ChatOpenAI } from "@langchain/openai";
    import { AIMessage, type BaseMessageLike } from "@langchain/core/messages";
    import { StateGraph, START, END, Annotation } from "@langchain/langgraph";
    
    const State = Annotation.Root({
      messages: Annotation<BaseMessageLike[]>({
        reducer: (left, right) => left.concat(right),
        default: () => [],
      }),
    });
    
    const model = new ChatOpenAI({
      model: "gpt-4o-mini",
      temperature: 0,
    });
    
    async function assistantNode(state: typeof State.State) {
      const response = await model.invoke(state.messages);
      return { messages: [response] };
    }
    
    const graph = new StateGraph(State)
      .addNode("assistant", assistantNode)
      .addEdge(START, "assistant")
      .addEdge("assistant", END)
      .compile();
    
    const result = await graph.invoke({
      messages: [{ role: "user", content: "Write one sentence about Dockerizing LangGraph." }],
    });
    
    const lastMessage = result.messages[result.messages.length - 1];
    console.log(lastMessage instanceof AIMessage ? lastMessage.content : lastMessage);
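
    To make the reducer's behavior concrete, here is a standalone sketch (no LangGraph imports, plain message objects) of how the `messages` channel accumulates node outputs:

```typescript
// Standalone illustration of the messages reducer above: each node returns a
// partial state, and the reducer concatenates its messages onto the channel.
type Msg = { role: string; content: string };

const reducer = (left: Msg[], right: Msg[]) => left.concat(right);

// Simulate two updates against an empty channel, as the graph runtime would.
let channel: Msg[] = [];
channel = reducer(channel, [{ role: "user", content: "hi" }]);
channel = reducer(channel, [{ role: "assistant", content: "hello" }]);

console.log(channel.length); // 2: both messages are retained, in order
```

    This append-only semantics is why `assistantNode` returns only the new response: the runtime merges it into the existing history rather than replacing it.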
    
  4. Add build and run scripts.

    Your container should run compiled JavaScript, not TypeScript source. That keeps startup predictable and avoids shipping dev tooling into production.

    {
      "name": "langgraph-docker-demo",
      "version": "1.0.0",
      "type": "module",
      "scripts": {
        "dev": "tsx src/index.ts",
        "build": "tsc -p tsconfig.json",
        "start": "node dist/index.js"
      }
    }
    
  5. Containerize the app with a multi-stage Dockerfile.

    This pattern gives you smaller images and cleaner dependency separation: the first stage installs every dependency and compiles the TypeScript, while the second stage ships only production dependencies and the compiled output.

    FROM node:20-alpine AS builder
    
    WORKDIR /app
    COPY package*.json tsconfig.json ./
    
    RUN npm install
    COPY src ./src
    RUN npm run build
    
    FROM node:20-alpine AS runner
    
    WORKDIR /app
    ENV NODE_ENV=production
    
    COPY package*.json ./
    RUN npm install --omit=dev
    
    COPY --from=builder /app/dist ./dist
    
    CMD ["node", "dist/index.js"]
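
    One detail worth pairing with this Dockerfile is a .dockerignore, so local node_modules, build output, and the .env file never enter the build context. A sketch (adjust to your repository):

```
node_modules
dist
.env
.git
```

    Excluding .env here is a second line of defense for the secret-handling rule in the next step, and keeping node_modules out makes builds faster and ensures dependencies are always installed fresh inside the image.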
    
  6. Build and run the container with your API key injected at runtime.

    Do not bake secrets into the image. Pass them in when you start the container so the same artifact can move across environments unchanged.

    docker build -t langgraph-docker-demo .
    docker run --rm \
      --env-file .env \
      langgraph-docker-demo
    

Testing It

Run npm run dev first to confirm the graph works outside Docker before debugging containers. If that passes but Docker fails, your problem is usually build context, module resolution, or missing environment variables.

Then run docker build and check that the image compiles cleanly without TypeScript errors. Finally, execute the container and verify you get a single assistant response printed to stdout.

If you want a stricter check, inspect the image size with docker images and confirm only production dependencies are present in the final stage. That matters once you start deploying multiple agent services and care about startup time and attack surface.

Next Steps

  • Add tool calling to the graph and persist conversation state with a checkpointer.
  • Replace stdout output with an HTTP API using Fastify or Express inside the same Docker pattern.
  • Add CI steps for npm run build, docker build, and vulnerability scanning before deployment.
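
As a starting point for the HTTP API idea above, here is a dependency-free sketch using Node's built-in http module (Fastify or Express would slot into the same Docker pattern); `invokeGraph` is a stand-in for the compiled graph's `invoke` from src/index.ts:

```typescript
// Sketch: wrap the compiled graph in a minimal HTTP API. The transport wiring
// is separated from the request logic so the logic is easy to test in isolation.
import { createServer } from "node:http";

type Invoke = (input: { messages: { role: string; content: string }[] }) =>
  Promise<{ messages: unknown[] }>;

// Pure request logic: turn user text into a graph invocation and extract
// the final message as the reply.
export async function handleChat(invokeGraph: Invoke, userText: string) {
  const result = await invokeGraph({
    messages: [{ role: "user", content: userText }],
  });
  return { reply: result.messages[result.messages.length - 1] };
}

// Transport wiring: POST /chat with a JSON body { "message": "..." }.
export function makeServer(invokeGraph: Invoke) {
  return createServer((req, res) => {
    if (req.method !== "POST" || req.url !== "/chat") {
      res.writeHead(404).end();
      return;
    }
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", async () => {
      try {
        const payload = await handleChat(invokeGraph, JSON.parse(body).message);
        res.writeHead(200, { "content-type": "application/json" });
        res.end(JSON.stringify(payload));
      } catch {
        res.writeHead(400).end();
      }
    });
  });
}
```

The Dockerfile above only changes by one line for this shape: add `EXPOSE 3000` (or whatever port you listen on) and keep the same CMD pointing at the compiled entry file.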

By Cyprian Aarons, AI Consultant at Topiax.
