LlamaIndex Tutorial (TypeScript): building prompt templates for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build and use prompt templates in LlamaIndex with TypeScript. You need this when you want consistent LLM outputs, reusable prompts across workflows, and fewer brittle string-concatenation bugs.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project already set up
  • LlamaIndex packages:
    • llamaindex
    • typescript
    • ts-node or a TypeScript runner like tsx
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with:
    • PromptTemplate
    • Settings
    • calling an LLM through LlamaIndex

Step-by-Step

  1. Install the dependencies and set your API key.

    Keep this simple: one package for LlamaIndex, one for running TypeScript locally, and your OpenAI key in the environment.

npm install llamaindex
npm install -D typescript tsx @types/node
export OPENAI_API_KEY="your-openai-api-key"
  2. Create a typed prompt template for a beginner-friendly explanation.

    In LlamaIndex, prompt templates are just structured strings with variables. The main win is that you can keep the wording stable while swapping inputs safely.

import { PromptTemplate } from "llamaindex";

const beginnerPrompt = new PromptTemplate({
  template: `
You are a helpful tutor.
Explain "{topic}" to a beginner in {tone} tone.

Rules:
- Use simple language
- Give one real-world example
- Keep it under {maxWords} words

Answer:
`.trim(),
});

const formatted = beginnerPrompt.format({
  topic: "prompt templates",
  tone: "clear",
  maxWords: "120",
});

console.log(formatted);
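
    If you want the expected variable names documented and checked up front, LlamaIndex.TS also accepts a templateVars list when constructing the template. A small sketch; how strictly format() is typed against these names varies by version:

import { PromptTemplate } from "llamaindex";

// Declaring templateVars up front documents the expected inputs and,
// in recent versions, ties them to the keys format() accepts.
const typedPrompt = new PromptTemplate({
  templateVars: ["topic", "tone", "maxWords"],
  template: `Explain "{topic}" in a {tone} tone, in under {maxWords} words.`,
});

console.log(
  typedPrompt.format({
    topic: "prompt templates",
    tone: "clear",
    maxWords: "120",
  }),
);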
  3. Add a reusable helper so your templates stay clean.

    In production, you do not want every caller formatting raw strings by hand. Wrap the template in a small function so your app always passes the same shape of data.

import { PromptTemplate } from "llamaindex";

const explanationTemplate = new PromptTemplate({
  template: `
Explain "{subject}" for someone who knows basic TypeScript but is new to AI.

Include:
1. What it is
2. Why it matters
3. A small example

Write in {style} style.
`.trim(),
});

type ExplanationInput = {
  subject: string;
  style: "simple" | "technical";
};

export function buildExplanationPrompt(input: ExplanationInput): string {
  return explanationTemplate.format({
    subject: input.subject,
    style: input.style,
  });
}
  4. Send the formatted prompt to an LLM through LlamaIndex.

    This is where the template becomes useful. You format once, then pass the result into the model call without mixing prompt logic into business logic.

// Note: newer LlamaIndex.TS releases ship providers in @llamaindex/openai;
// older versions re-export OpenAI from the main package, as here.
import { OpenAI } from "llamaindex";
import { buildExplanationPrompt } from "./prompt";

async function main() {
  const llm = new OpenAI({ model: "gpt-4o-mini" });

  const prompt = buildExplanationPrompt({
    subject: "prompt templates in LlamaIndex",
    style: "simple",
  });

  // complete() takes a params object in current LlamaIndex.TS releases.
  const response = await llm.complete({ prompt });
  console.log(response.text);
}

main().catch(console.error);
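
    If you would rather configure the model once instead of constructing it at every call site, the Settings object from the prerequisites covers that. A minimal sketch, assuming the helper from step 3 lives in ./prompt:

import { OpenAI, Settings } from "llamaindex";
import { buildExplanationPrompt } from "./prompt";

// One global default LLM; anything that reads Settings.llm uses it.
Settings.llm = new OpenAI({ model: "gpt-4o-mini" });

async function run() {
  const prompt = buildExplanationPrompt({
    subject: "prompt templates in LlamaIndex",
    style: "simple",
  });
  const response = await Settings.llm.complete({ prompt });
  console.log(response.text);
}

run().catch(console.error);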
  5. Build a few variations instead of one giant prompt.

    Beginners usually start with one template and then cram every use case into it. A better pattern is multiple focused templates for summarization, extraction, or explanation.

import { PromptTemplate } from "llamaindex";

export const summaryTemplate = new PromptTemplate({
  template: `
Summarize the following text for a beginner:

Text:
{content}

Summary:
`.trim(),
});

export const extractionTemplate = new PromptTemplate({
  template: `
Extract all action items from this text.
Return them as bullet points only.

Text:
{content}

Action items:
`.trim(),
});

const summaryPrompt = summaryTemplate.format({
  content: "LlamaIndex helps developers connect data sources to LLMs.",
});

console.log(summaryPrompt);
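
    To run a variation end to end, feed its output to the same model call as step 4. A quick sketch, assuming the templates above live in a hypothetical ./templates module:

import { OpenAI } from "llamaindex";
import { extractionTemplate } from "./templates";

async function run() {
  const llm = new OpenAI({ model: "gpt-4o-mini" });

  // Same pattern as the summary: format first, then send the string.
  const prompt = extractionTemplate.format({
    content: "Ship the demo Friday. Alice updates the docs; Bob tests the API.",
  });

  const response = await llm.complete({ prompt });
  console.log(response.text);
}

run().catch(console.error);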

Testing It

Run the script that prints the formatted prompt first. You should see the placeholders replaced cleanly, with no leftover {topic} or {maxWords} tokens. Then run the LLM call and check that the output follows your instructions: beginner-friendly language, one example, and short length.
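One way to automate that check is a quick scan for anything that still looks like a placeholder. This is a plain regex heuristic, not a LlamaIndex API, and it will false-positive on prompts that legitimately contain braces:

// Heuristic: flag anything that still looks like a {placeholder}.
function assertNoLeftoverVars(prompt: string): void {
  const leftovers = prompt.match(/\{[a-zA-Z]\w*\}/g);
  if (leftovers) {
    throw new Error(`Unreplaced template variables: ${leftovers.join(", ")}`);
  }
}

assertNoLeftoverVars(formatted); // "formatted" from the step 2 script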

If the model ignores your rules, tighten the template by making constraints more explicit and moving them higher in the prompt. For debugging, log both the formatted prompt and the final response so you can see whether the problem is your template or the model behavior.

A quick sanity check is to swap inputs like "prompt templates", "vector search", and "RAG" into the same helper function. If formatting stays stable across all three, your template layer is doing its job.
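That check is easy to script with the helper from step 3:

import { buildExplanationPrompt } from "./prompt";

// Swap several subjects through the same helper and eyeball the output.
for (const subject of ["prompt templates", "vector search", "RAG"]) {
  const prompt = buildExplanationPrompt({ subject, style: "simple" });
  console.log(`--- ${subject} ---`);
  console.log(prompt);
}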

Next Steps

  • Learn ChatPromptTemplate for multi-message prompts with system/user roles (a rough sketch follows this list).
  • Add output parsing so your prompts return structured JSON instead of plain text.
  • Combine prompt templates with retrieval chains so prompts can include live context from documents.
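
As a starting point for the first bullet, here is a minimal multi-message call using llm.chat with plain ChatMessage objects. The exact ChatPromptTemplate API differs across LlamaIndex.TS versions, so treat this as a sketch of the shape, not the template class itself:

import { OpenAI } from "llamaindex";

async function run() {
  const llm = new OpenAI({ model: "gpt-4o-mini" });

  // A system/user message pair; a chat prompt template would generate
  // the user content from variables the same way PromptTemplate does.
  const response = await llm.chat({
    messages: [
      { role: "system", content: "You are a helpful tutor." },
      { role: "user", content: "Explain prompt templates in one paragraph." },
    ],
  });

  console.log(response.message.content);
}

run().catch(console.error);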


By Cyprian Aarons, AI Consultant at Topiax.
