LlamaIndex Tutorial (TypeScript): building prompt templates for intermediate developers
This tutorial shows how to build reusable prompt templates in LlamaIndex TypeScript and wire them into a working query pipeline. You need this when your prompts are no longer one-off strings and you want consistent formatting, safer variable injection, and easier iteration across multiple LLM calls.
What You'll Need
- Node.js 18+
- A TypeScript project with `ts-node` or `tsx`
- `llamaindex` installed
- An OpenAI API key exported as `OPENAI_API_KEY`
- A small text file or document source for testing
- Basic familiarity with async/await and TypeScript generics
Step-by-Step
- Install the package and set up your environment.
LlamaIndex TypeScript ships the core abstractions you need: LLMs, prompt templates, and query engines. Keep this in a clean project so you can see exactly how the template flows through the call chain.
```shell
npm init -y
npm install llamaindex
npm install -D typescript tsx @types/node
```
- Create a typed prompt template for your use case.
The key idea is to keep placeholders explicit so your application controls every variable that reaches the model. For intermediate developers, this is where you stop hardcoding prompts and start treating them like reusable components.
```typescript
import { PromptTemplate } from "llamaindex";

const supportPrompt = new PromptTemplate({
  // Declaring the placeholders up front keeps the variable set explicit.
  templateVars: ["context", "question"],
  template: [
    "You are a bank support assistant.",
    "Answer using only the context below.",
    "",
    "Context:",
    "{context}",
    "",
    "Question: {question}",
    "",
    "Return a concise answer with bullet points if needed.",
  ].join("\n"),
});

console.log(
  supportPrompt.format({
    context: "The card replacement fee is $15 and arrives in 5 business days.",
    question: "How much does card replacement cost?",
  }),
);
```
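Under the hood, `format` is essentially placeholder substitution. A minimal stand-in (illustrative only, not the library's implementation) makes the mechanics concrete:

```typescript
// A minimal stand-in for template formatting: replace each {name}
// placeholder with its value, leaving unknown names untouched.
function formatTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, name: string) =>
    name in vars ? vars[name] : match,
  );
}

console.log(formatTemplate("Fee: {fee}. Ref: {ref}", { fee: "$15" }));
// → Fee: $15. Ref: {ref}
```

Leaving unknown placeholders intact rather than erasing them is deliberate: a leftover `{name}` in rendered output is a visible wiring bug instead of a silent one.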
- Load documents and build an index that can feed your prompt.
This example uses a simple text file so you can run it locally without extra infrastructure. In production, this same pattern works with PDFs, databases, or API-fed content.
```typescript
import fs from "node:fs/promises";
import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  const text = await fs.readFile("./policy.txt", "utf-8");
  const docs = [new Document({ text })];
  const index = await VectorStoreIndex.fromDocuments(docs);
  console.log("Indexed documents:", docs.length);
  return index;
}

main().catch(console.error);
```
- Plug the template into a query engine with custom variables.
This is where the prompt becomes useful in practice: the retriever supplies context, while your app supplies the question and any instruction overrides. You get repeatable output without rebuilding the whole chain each time.
```typescript
import fs from "node:fs/promises";
import { Document, OpenAI, PromptTemplate, VectorStoreIndex } from "llamaindex";

async function run() {
  const text = await fs.readFile("./policy.txt", "utf-8");
  const index = await VectorStoreIndex.fromDocuments([new Document({ text })]);

  // The response synthesizer fills {context} and {query} itself, so the
  // text QA template must use those exact placeholder names.
  const prompt = new PromptTemplate({
    templateVars: ["context", "query"],
    template:
      "Context:\n{context}\n\nQuestion:\n{query}\n\nAnswer as a compliance analyst:",
  });

  const queryEngine = index.asQueryEngine({
    llm: new OpenAI({ model: "gpt-4o-mini" }),
    textQaTemplate: prompt,
  });

  const response = await queryEngine.query({
    query: "What is the fee for wire transfers?",
  });
  console.log(response.toString());
}

run().catch(console.error);
```
- Build a reusable prompt factory for multiple scenarios.
In real systems, you usually need different prompts for support, compliance, summarization, or extraction. A factory keeps those variants centralized and makes it easy to test changes without touching business logic.
```typescript
import { PromptTemplate } from "llamaindex";

type PromptKind = "support" | "compliance";

function createPrompt(kind: PromptKind) {
  if (kind === "support") {
    return new PromptTemplate({
      templateVars: ["context", "question"],
      template: "You are support.\nContext:\n{context}\n\nQuestion:\n{question}\n",
    });
  }
  return new PromptTemplate({
    templateVars: ["context", "question"],
    template:
      "You are compliance review.\nUse only this policy text:\n{context}\n\nIssue:\n{question}\n",
  });
}

const compliancePrompt = createPrompt("compliance");
console.log(
  compliancePrompt.format({
    context: "Transactions above $10,000 require manual review.",
    question: "Does this transfer need review?",
  }),
);
```
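As the number of kinds grows, an if/else chain gets clumsy. One alternative, sketched here in plain TypeScript with bare template strings (the real version would wrap each string in a `PromptTemplate`), is a `Record` keyed by the kind:

```typescript
type PromptKind = "support" | "compliance";

// Record<PromptKind, string> makes the registry exhaustive: adding a new
// kind to the union without a template here is a compile-time error.
const promptRegistry: Record<PromptKind, string> = {
  support: "You are support.\nContext:\n{context}\n\nQuestion:\n{question}\n",
  compliance:
    "You are compliance review.\nUse only this policy text:\n{context}\n\nIssue:\n{question}\n",
};

function getTemplate(kind: PromptKind): string {
  return promptRegistry[kind];
}

console.log(getTemplate("support").split("\n")[0]);
// → You are support.
```

The compile-time exhaustiveness is the point: prompt variants stay centralized and the type checker, not a runtime error, tells you when one is missing.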
Testing It
Run the script against a small policy.txt file first so you can inspect whether retrieval and formatting behave as expected. If the model ignores your instructions, check that your placeholders match exactly and that you are passing values under the same keys used in the template.
A good sanity test is to deliberately rename one placeholder, for example change {context} to {ctx} in the template but not in the variables you pass, and confirm the output breaks; that proves your prompt wiring matters. Also inspect the raw formatted strings before sending them to the LLM so you catch missing variables early.
If you're using a query engine, compare answers with and without `textQaTemplate` to verify your custom instructions are actually being applied. In production, I also log rendered prompts for a small sample of requests so regressions show up fast during prompt iteration.
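That pre-flight inspection can be automated. A small sketch (plain TypeScript, not a llamaindex API) that scans a rendered prompt for placeholders that were never filled:

```typescript
// Scan a rendered prompt for {name} placeholders that survived formatting.
// Illustrative helper, not part of the llamaindex API.
function findUnfilledPlaceholders(rendered: string): string[] {
  const matches = rendered.match(/\{\w+\}/g);
  return matches ? [...new Set(matches)] : []; // dedupe repeated placeholders
}

console.log(findUnfilledPlaceholders("All filled, nothing left."));
// → []
console.log(findUnfilledPlaceholders("Context:\n{context}\n\nQ: what is the fee?"));
// → [ '{context}' ]
```

Calling this right before the LLM call turns a silently broken prompt into a loud failure you can catch in tests.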
Next Steps
- Add separate templates for retrieval QA, summarization, and structured extraction.
- Learn how to combine `PromptTemplate` with output parsers for JSON responses.
- Move prompt definitions into versioned files so product teams can review changes like code.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.