LangChain Tutorial (TypeScript): deploying with Docker for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to package a small LangChain TypeScript app into a Docker image and run it locally the same way you would in a containerized deployment. You need this when you want your agent or chain to behave consistently across laptops, CI, and production servers without depending on local Node.js setup.

What You'll Need

  • Node.js 20+
  • Docker Desktop or Docker Engine
  • An OpenAI API key
  • A basic TypeScript project with npm
  • These packages:
    • langchain
    • @langchain/core
    • @langchain/openai
    • typescript
    • tsx
    • dotenv

Step-by-Step

  1. Create a new project and install the dependencies. This gives you a clean TypeScript app with LangChain and a runtime that can execute .ts files directly during development.
mkdir langchain-docker-demo
cd langchain-docker-demo
npm init -y
npm install langchain @langchain/core @langchain/openai dotenv
npm install -D typescript tsx @types/node
  2. Add your TypeScript config and package scripts. The first block below is package.json and the second is tsconfig.json. The important part is "type": "module", which runs the app in ESM mode and matches how current LangChain packages are published.
{
  "name": "langchain-docker-demo",
  "type": "module",
  "scripts": {
    "dev": "tsx src/index.ts",
    "start": "node dist/index.js",
    "build": "tsc"
  },
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/openai": "^0.6.0",
    "dotenv": "^16.4.5",
    "langchain": "^0.3.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.19.0",
    "typescript": "^5.6.0"
  }
}
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "resolveJsonModule": true
  },
  "include": ["src"]
}
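One NodeNext gotcha worth flagging before you write more code: relative imports in your TypeScript source must use the compiled .js extension, or Node will fail to resolve them at runtime. A minimal sketch (the greet helper is hypothetical, not part of this tutorial's app):

```typescript
// src/greet.ts — a hypothetical helper module
export function greet(name: string): string {
  return `Hello, ${name}!`;
}

// In src/index.ts you would import it with a .js extension, even though
// the source file is greet.ts — NodeNext resolves against the compiled output:
//   import { greet } from './greet.js';
```

If you see ERR_MODULE_NOT_FOUND for a file that clearly exists, a missing .js extension is the usual cause.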
  3. Create the LangChain app itself. This example uses a prompt template and an OpenAI chat model, then prints the response to stdout so Docker logs can capture it.
import 'dotenv/config';
import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';

async function main() {
  const model = new ChatOpenAI({
    model: 'gpt-4o-mini',
    temperature: 0,
  });

  const prompt = PromptTemplate.fromTemplate(
    'Write one concise sentence explaining Docker for a TypeScript developer: {topic}'
  );

  const chain = prompt.pipe(model);

  const response = await chain.invoke({ topic: 'deploying LangChain apps' });
  console.log(response.content);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
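Hosted model calls can fail transiently (rate limits, network blips). ChatOpenAI accepts a maxRetries option for exactly this; if you want retry control outside the model client, a generic helper like the hypothetical withRetry below is one option (a sketch, not a LangChain API):

```typescript
// Hypothetical helper: retry an async function with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 200ms, 400ms, 800ms, ... before the next attempt
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt)
      );
    }
  }
  throw lastError;
}
```

You could then wrap the call as `withRetry(() => chain.invoke({ topic: 'deploying LangChain apps' }))`.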
  4. Test it locally before touching Docker. This catches bad imports, missing API keys, and version issues while feedback is still fast.
export OPENAI_API_KEY="your-key-here"
npm run dev
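A missing key is the most common failure here, and inside a container it surfaces only as an opaque authentication error. A small fail-fast check at the top of main() makes the problem obvious (requireEnv is a hypothetical helper, not a LangChain API):

```typescript
// Hypothetical helper: throw a clear error if a required env var is unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. at the top of main():
//   requireEnv('OPENAI_API_KEY');
```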
  5. Add a production Dockerfile that builds the TypeScript app and runs the compiled output. Keep the image small by installing only production dependencies in the final stage.
FROM node:20-alpine AS builder

WORKDIR /app

COPY package*.json tsconfig.json ./
RUN npm ci

COPY src ./src
RUN npm run build

FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev

COPY --from=builder /app/dist ./dist

ENV NODE_ENV=production
CMD ["node", "dist/index.js"]
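The builder stage copies only the package files and src, but `docker build` still ships your whole directory as build context. A .dockerignore (a sketch; adjust to your project) keeps node_modules and local artifacts out of it:

```
node_modules
dist
.env
.git
npm-debug.log
```

Excluding .env also ensures your API key never lands in an image layer.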
  6. Build and run the container with your API key passed at runtime. Passing secrets as environment variables is fine for local testing; in real deployments you would use your platform’s secret manager.
docker build -t langchain-docker-demo .
docker run --rm \
  -e OPENAI_API_KEY="your-key-here" \
  langchain-docker-demo

Testing It

If everything is working, the container should print one sentence generated by the model and then exit cleanly. If you get an authentication error, check that OPENAI_API_KEY is available inside the container and that the key is valid.

If the build fails, look at your TypeScript module settings first; LangChain’s current packages expect ESM-style imports. Also verify that src/index.ts exists and that npm run build succeeds outside Docker before assuming the image is broken.

A good sanity check is to rebuild after changing only the prompt text, then rerun the container and confirm that the output changes accordingly. That tells you both the app layer and container layer are behaving predictably.

Next Steps

  • Add an HTTP server with Fastify or Express so your chain can serve requests instead of just printing to stdout.
  • Move from a single prompt to LCEL pipelines with retries, structured outputs, and tool calling.
  • Add Docker Compose for local development if your agent needs Redis, Postgres, or another backing service.
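As a starting point for the first bullet, here is a dependency-free sketch using Node's built-in http module instead of Fastify or Express. makeServer and the injected generate function are hypothetical names; in the real app you would pass in a function that invokes the chain from src/index.ts:

```typescript
import { createServer, type Server } from 'node:http';

// Hypothetical wrapper: takes a generate function (e.g. one that calls the
// chain) so the HTTP layer stays testable without an API key.
function makeServer(generate: (topic: string) => Promise<string>): Server {
  return createServer(async (req, res) => {
    const url = new URL(req.url ?? '/', 'http://localhost');
    const topic = url.searchParams.get('topic') ?? 'docker';
    try {
      const text = await generate(topic);
      res.writeHead(200, { 'content-type': 'text/plain' });
      res.end(text);
    } catch {
      res.writeHead(500);
      res.end('generation failed');
    }
  });
}
```

In the Docker setup you would then add `EXPOSE 3000` to the Dockerfile and map the port with `docker run -p 3000:3000`.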

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

