LangChain Tutorial (TypeScript): deploying with Docker for advanced developers
This tutorial shows you how to package a LangChain TypeScript app into a Docker image, run it locally, and deploy it with a predictable runtime. You need this when your agent works on your laptop but you want the same behavior in CI, staging, or production without “works on my machine” drift.
What You'll Need
- Node.js 20+
- Docker Desktop or Docker Engine
- An OpenAI API key
- A basic TypeScript project with npm
- These packages: `langchain`, `@langchain/openai`, `dotenv`, `typescript`, and `tsx` or `tsc` for local execution/builds
Step-by-Step
- Set up the project and install dependencies.
Keep the app small and explicit: one chain, one entrypoint, one container. That makes debugging in Docker much easier than trying to containerize a full agent framework on day one.
```bash
mkdir langchain-docker-ts
cd langchain-docker-ts
npm init -y
npm install langchain @langchain/openai @langchain/core dotenv
npm install -D typescript tsx @types/node
npx tsc --init
```

Note that `@langchain/core` is installed explicitly because the script below imports from it directly.
- Add a minimal LangChain script that reads configuration from environment variables.
This example uses the modern LangChain split packages and a prompt chain that is easy to test in containers.
```ts
// src/index.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise assistant for bank operations."],
  ["human", "{question}"],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

async function main() {
  const question = process.env.QUESTION ?? "What is an ACH transfer?";
  const answer = await chain.invoke({ question });
  console.log(answer);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```
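If you want the container to fail fast when configuration is missing, a small guard before constructing the model helps: without it, a missing `OPENAI_API_KEY` only surfaces as a client error at request time. The `requireEnv` helper below is a hypothetical addition for illustration, not a LangChain API:

```typescript
// A minimal sketch: fail fast on missing configuration instead of
// letting the OpenAI client throw a less obvious error at request time.
// `requireEnv` is a hypothetical helper name, not part of LangChain.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (before building the chain), so that `docker run` without
// --env-file exits immediately with a clear message:
//   const apiKey = requireEnv("OPENAI_API_KEY");
```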
- Make the TypeScript build predictable for Docker.
Compile to `dist/` so the container runs plain Node.js instead of relying on a TypeScript loader inside the image. That keeps runtime images smaller and avoids surprises from dev-only tooling.
```json
{
  "name": "langchain-docker-ts",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts"
  },
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/openai": "^0.6.0",
    "dotenv": "^16.4.5",
    "langchain": "^0.3.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.19.0",
    "typescript": "^5.6.0"
  }
}
```
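Because `package.json` declares `"type": "module"` and the runtime command is `node dist/index.js`, the compiler options need to line up with ESM output in `dist/`. A plausible `tsconfig.json` for this setup (the exact options here are an assumption, not from the original tutorial; adjust to your project):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}
```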
- Create a multi-stage Dockerfile for production builds.
The first stage installs dependencies and compiles TypeScript; the second stage copies only what is needed to run. This is the pattern you want for internal tools, APIs, and agents that will be deployed repeatedly.
```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY tsconfig.json ./
COPY src ./src
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
```
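One detail worth adding alongside the Dockerfile: a `.dockerignore` keeps the build context small and, critically, keeps `.env` out of `COPY` instructions so secrets never land in an image layer. A minimal sketch:

```
node_modules
dist
.env
.git
```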
- Add environment configuration and build the image locally.
Use `.env` for local development, but pass secrets through your deployment platform in production. The container should only need `OPENAI_API_KEY` and optionally `QUESTION`.
```bash
# .env
OPENAI_API_KEY=your_openai_api_key_here
QUESTION=Summarize the difference between deposits and withdrawals.
```
```bash
docker build -t langchain-docker-ts .
docker run --rm --env-file .env langchain-docker-ts
```
Testing It
Run the app outside Docker first with `npm run dev` so you can catch TypeScript or LangChain errors before introducing container layers. Then run `docker build` and verify the image compiles cleanly without depending on local `node_modules`.
Inside the container, check that the model responds to your `QUESTION` variable and that no secrets are baked into the image layers. If you want to validate repeatability, rebuild after deleting local dependencies and confirm the container still runs.
For deployment, use the same image tag across environments and inject secrets at runtime through your orchestrator or secret manager.
Next Steps
- Add structured output with Zod schemas so your agent returns typed JSON instead of plain text.
- Replace the single prompt chain with tool calling using LangChain tools and OpenAI function calling.
- Add health checks and logging middleware before deploying behind an API gateway or job runner.
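The health-check idea can be sketched with Node's built-in `http` module, so it adds no dependencies to the runtime image. The `/healthz` path and port 3000 here are assumptions, not from the tutorial:

```typescript
import { createServer } from "node:http";

// A minimal liveness endpoint sketch. In a real deployment you would
// start this alongside the chain entrypoint; the `/healthz` path and
// port 3000 are assumptions, not part of the tutorial code.
const server = createServer((req, res) => {
  if (req.url === "/healthz") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => {
  console.log("Health endpoint listening on :3000");
});
```

Docker or your orchestrator can then poll this endpoint with a `HEALTHCHECK` instruction or a liveness probe.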
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.