LangChain Tutorial (TypeScript): building prompt templates for beginners

By Cyprian Aarons
Updated 2026-04-21

This tutorial shows you how to build reusable prompt templates in LangChain for TypeScript, then plug them into a working chain that formats user input into consistent prompts. You need this when you want your app to stop hardcoding prompt strings and start generating structured, repeatable prompts that are easier to maintain and test.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project set up with tsconfig.json
  • These packages:
    • langchain
    • @langchain/openai
    • dotenv
    • typescript
  • An OpenAI API key in your environment as OPENAI_API_KEY
  • Basic familiarity with async/await and ES modules

Install the dependencies:

npm install langchain @langchain/openai dotenv
npm install -D typescript tsx @types/node

Step-by-Step

  1. Create a .env file and load your API key early in the app. If you skip this, the model client will fail at runtime.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

console.log("API key loaded:", !!process.env.OPENAI_API_KEY);
  2. Build a simple prompt template with variables. The key idea is to separate the prompt structure from the data you inject into it.
import { PromptTemplate } from "@langchain/core/prompts";

const beginnerPrompt = PromptTemplate.fromTemplate(`
You are a helpful tutor.
Explain {topic} to a beginner in {style} style.
Keep it under {maxWords} words.
`);

const formatted = await beginnerPrompt.format({
  topic: "prompt templates",
  style: "plain English",
  maxWords: "120",
});

console.log(formatted);
  3. Use ChatPromptTemplate when you want chat-style messages instead of one plain text block. This is the right choice for most chat models because it preserves roles like system and user.
import { ChatPromptTemplate } from "@langchain/core/prompts";

const chatPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a patient programming tutor."],
  ["user", "Explain {topic} to a beginner using {exampleCount} examples."],
]);

const messages = await chatPrompt.formatMessages({
  topic: "LangChain prompt templates",
  exampleCount: "2",
});

console.log(messages.map((m) => `${m.getType()}: ${m.content}`).join("\n"));
  4. Connect the prompt template to a model using a runnable chain. This gives you a clean pipeline: input data goes in, the formatted prompt goes through, and the model output comes back out.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You explain software concepts clearly."],
  ["user", "Teach a beginner about {concept} in {tone} tone."],
]);

const parser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(parser);

const result = await chain.invoke({
  concept: "prompt templates in LangChain",
  tone: "friendly",
});

console.log(result);
  5. Add compile-time validation by defining an explicit input type for your template variables. In real projects, this catches mistakes before runtime when someone passes the wrong field name or forgets a required input.
import { PromptTemplate } from "@langchain/core/prompts";

type PromptInput = {
  topic: string;
  audience: string;
};

const template = PromptTemplate.fromTemplate(
  "Explain {topic} for a {audience} audience."
);

async function buildPrompt(input: PromptInput) {
  return template.format(input);
}

const promptText = await buildPrompt({
  topic: "temperature settings",
  audience: "beginner",
});

console.log(promptText);

Testing It

Run each file with tsx so you can execute TypeScript directly without compiling first. For example, save one script as prompt-template.ts and run npx tsx prompt-template.ts. Because these snippets use top-level await, make sure your project runs as ES modules (for example, set "type": "module" in package.json), or wrap each example in an async main function.

Check two things first:

  • The formatted prompt text contains every variable value you passed in
  • The chain returns a readable response instead of an error about missing keys or credentials

If you get an authentication error, confirm OPENAI_API_KEY is loaded before the model is created. If you get a formatting error, check that your variable names in .format() or .invoke() match the placeholders exactly.

Next Steps

  • Learn partial variables so you can prefill stable parts of a template once and reuse them across requests
  • Move from ChatPromptTemplate to full chains with retrievers when your prompts need external context
  • Add structured output parsing so the model returns JSON instead of plain text

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
