LlamaIndex Tutorial (TypeScript): deploying with Docker for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to package a TypeScript LlamaIndex app into a Docker image and run it the same way in local dev, staging, or a server. You need this when your agent works on your laptop but you want a repeatable runtime with pinned dependencies, environment variables, and no “works on my machine” drift.

What You'll Need

  • Node.js 20+
  • Docker Desktop or Docker Engine
  • An OpenAI API key
  • A basic TypeScript project using llamaindex
  • npm or pnpm
  • A .env file for secrets
  • A source document to index, or a simple prompt-only setup

Step-by-Step

  1. Start with a minimal TypeScript project and install the packages you actually need. For this example, we’ll keep it small: LlamaIndex, dotenv for env loading, and TypeScript tooling.
mkdir llamaindex-docker-ts
cd llamaindex-docker-ts
npm init -y
npm install llamaindex dotenv
npm install -D typescript tsx @types/node
npx tsc --init
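One caveat: the tsconfig.json that `tsc --init` generates does not emit to dist/ by default, while the scripts and Dockerfile below expect compiled output there. A minimal configuration matching this project layout might look like the following (the exact compiler options are a suggestion, not the only working setup):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "rootDir": "src",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}
```

The important pieces are `outDir: "dist"` (so `node dist/index.js` works) and the NodeNext module settings (so the compiled output runs as ESM, matching `"type": "module"` in package.json).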
  2. Add a simple LlamaIndex script that reads your API key from the environment and runs a query engine against in-memory documents. This gives you a real executable entrypoint that Docker can run without extra build steps.
// src/index.ts
import "dotenv/config";
import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  const docs = [
    new Document({ text: "LlamaIndex helps connect LLMs to external data." }),
    new Document({ text: "Docker makes deployments reproducible across environments." }),
  ];

  const index = await VectorStoreIndex.fromDocuments(docs);
  const queryEngine = index.asQueryEngine();

  const response = await queryEngine.query({
    query: "Why use Docker with LlamaIndex?",
  });

  console.log(String(response));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
  3. Add npm scripts so you can run the app locally before containerizing. Use dev for quick iteration and start for the same compiled entrypoint the container runs; keeping that path identical makes debugging much easier when something fails in production.
{
  "name": "llamaindex-docker-ts",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "dev": "tsx src/index.ts",
    "start": "node dist/index.js",
    "build": "tsc"
  },
  "dependencies": {
    "dotenv": "^16.4.5",
    "llamaindex": "^0.6.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.16.2",
    "typescript": "^5.5.4"
  }
}
  4. Create a Dockerfile that installs dependencies, builds TypeScript, and runs the compiled output. This is the production pattern: build once, run the artifact, and keep the runtime image smaller than the build image.
FROM node:20-alpine AS build

WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY tsconfig.json ./
COPY src ./src
RUN npm run build

FROM node:20-alpine

WORKDIR /app

ENV NODE_ENV=production

COPY package*.json ./
RUN npm ci --omit=dev

COPY --from=build /app/dist ./dist

CMD ["node", "dist/index.js"]
  5. Add a .dockerignore file so your image doesn’t carry local junk into the build context. If you skip this, your builds get slower and less predictable.
node_modules
dist
.git
.env
npm-debug.log
Dockerfile
README.md
  6. Build and run the container with your API key injected at runtime. Do not bake secrets into the image; pass them as environment variables so the same image can move between environments safely.
docker build -t llamaindex-ts-app .

docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  llamaindex-ts-app
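If you would rather not pass flags by hand every time, the same run can be expressed with Docker Compose, pulling the key from your shell or a local .env file. The service name here is just an example:

```yaml
services:
  app:
    build: .
    image: llamaindex-ts-app
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```

Compose substitutes `${OPENAI_API_KEY}` at run time, so the secret still never lands in an image layer.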

Testing It

Run npm run dev first and confirm you get a response from the query engine before touching Docker. That isolates code issues from container issues.

Then build the image and run it with docker run. If it fails inside Docker but works locally, check three things first: your .dockerignore, whether src was copied correctly, and whether OPENAI_API_KEY is actually present in the container environment.

If you want stronger verification, shell into the container and inspect /app/dist/index.js. That tells you whether the TypeScript build completed and whether the runtime image contains only what it needs.
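Two one-shot checks along those lines, without an interactive shell (commands are illustrative; adjust the image name if yours differs):

```shell
# Confirm the TypeScript build produced the runtime artifact
docker run --rm --entrypoint ls llamaindex-ts-app -l /app/dist/index.js

# Confirm the API key actually reaches the container environment
docker run --rm -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  --entrypoint sh llamaindex-ts-app \
  -c 'test -n "$OPENAI_API_KEY" && echo "OPENAI_API_KEY is set"'
```

If the first command fails, the build stage or the COPY --from=build line is the problem; if the second prints nothing, the variable was empty on the host before it ever reached Docker.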

Next Steps

  • Replace in-memory documents with file loading using SimpleDirectoryReader
  • Add an HTTP layer with Express or Fastify so Docker runs an API instead of a one-off script
  • Move secrets and model settings into environment-based config for staging and production
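The HTTP-layer idea above can be sketched with Node's built-in http module, no framework required for a first pass. Here `answer` is a hypothetical stand-in for the query engine call from src/index.ts; wire in `queryEngine.query({ query })` where indicated:

```typescript
// src/server.ts - minimal sketch of exposing the agent as an HTTP API.
import { createServer } from "node:http";

// Stand-in for the real LlamaIndex call; replace the body with
// String(await queryEngine.query({ query: question })) in your project.
async function answer(question: string): Promise<string> {
  return `You asked: ${question}`;
}

export const server = createServer(async (req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  if (req.method === "GET" && url.pathname === "/query") {
    const text = await answer(url.searchParams.get("q") ?? "");
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ answer: text }));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "not found" }));
  }
});

// In the container, bind to 0.0.0.0 and add a matching EXPOSE line
// to the Dockerfile; the port number here is just an example.
if (process.env.PORT) {
  server.listen(Number(process.env.PORT), "0.0.0.0");
}
```

With this in place, the Dockerfile's CMD would point at the compiled server instead of the one-off script, and `docker run -p 3000:3000 -e PORT=3000 ...` turns the image into a long-running service.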


By Cyprian Aarons, AI Consultant at Topiax.
