LangChain Tutorial (TypeScript): building prompt templates for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build reusable prompt templates in LangChain with TypeScript, then wire them into a real chat model call. You need this when your prompts stop being one-off strings and start becoming versioned, testable, and safe to reuse across agents, tools, and workflows.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project initialized
  • langchain installed
  • @langchain/openai installed
  • An OpenAI API key set in your environment as OPENAI_API_KEY
  • A .env file or shell environment for local development
  • Basic familiarity with async/await and ES modules

Step-by-Step

  1. Install the dependencies and set up your project. Keep this clean from the start so your prompt templates are easy to move into production code later.
npm init -y
npm install langchain @langchain/core @langchain/openai dotenv
npm install -D typescript tsx @types/node
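The examples below use ES module imports and top-level await, so Node needs to treat your files as ES modules. A minimal package.json excerpt (assuming you run scripts with tsx, as installed above) might look like this:

```json
{
  "type": "module",
  "scripts": {
    "start": "tsx index.ts"
  }
}
```

With `"type": "module"` set, `npm start` runs your entry file and top-level await works without extra configuration.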
  2. Create a typed prompt template using ChatPromptTemplate. The key idea is to separate the prompt structure from the runtime values so you can reuse it across different inputs.
import "dotenv/config";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a senior underwriting assistant. Answer clearly and concisely."],
  ["human", "Explain {topic} for a {audience} audience in {tone} tone."],
]);

const formattedMessages = await prompt.formatMessages({
  topic: "policy exclusions",
  audience: "claims adjuster",
  tone: "practical",
});

console.log(formattedMessages.map((m) => `${m.getType()}: ${m.content}`).join("\n"));
  3. Add a model and pipe the template into it. This is the pattern you want in real applications because the prompt stays declarative while execution stays separate.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.2,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a compliance analyst for insurance operations."],
  ["human", "Summarize the risk of {event} for {businessUnit} in 3 bullets."],
]);

const chain = prompt.pipe(model);

const response = await chain.invoke({
  event: "late premium payment",
  businessUnit: "small commercial underwriting",
});

console.log(response.content);
  4. Build a stronger template with multiple variables and structured instructions. Intermediate developers usually need this once prompts start serving different teams, regions, or policy types.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", [
    "You write internal guidance for bank operations.",
    "Use plain language.",
    "Do not invent policy details."
  ].join(" ")],
  ["human", [
    "Policy area: {policyArea}",
    "Region: {region}",
    "Task: Draft a short explanation for frontline staff.",
    "Include one example and one caution."
  ].join("\n")],
]);

const chain = prompt.pipe(model);

const result = await chain.invoke({
  policyArea: "account opening verification",
  region: "UK",
});

console.log(result.content);
  5. Add input validation before invoking the chain. In production, you do not want missing variables turning into vague model failures or bad prompts reaching users.
type PromptInput = {
  policyArea: string;
  region: string;
};

function validateInput(input: Partial<PromptInput>): PromptInput {
  if (!input.policyArea) throw new Error("policyArea is required");
  if (!input.region) throw new Error("region is required");
  return input as PromptInput;
}

const safeInput = validateInput({
  policyArea: process.env.POLICY_AREA,
  region: process.env.REGION,
});

console.log(safeInput);
  6. Put it together in one executable script. This gives you a clean baseline you can extend into agents, tools, or API routes without rewriting your prompt logic.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

type PromptInput = {
  topic: string;
  audience: string;
};

function validateInput(input: Partial<PromptInput>): PromptInput {
  if (!input.topic) throw new Error("topic is required");
  if (!input.audience) throw new Error("audience is required");
  return input as PromptInput;
}

async function main() {
  const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0.2 });

  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a technical writer for regulated industries."],
    ["human", "Write a concise explanation of {topic} for {audience}."],
  ]);

  const chain = prompt.pipe(model);
  const input = validateInput({ topic: process.env.TOPIC, audience: process.env.AUDIENCE });
  
  const response = await chain.invoke(input);
  console.log(response.content);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

Testing It

Run the script with your environment variables set, then confirm the output changes when you change the variables. For example, switch TOPIC from "prompt templates" to "message history" and verify the response follows the new input without changing code.

If you want to test the template itself before calling the model, use formatMessages() and inspect the rendered messages. That catches broken placeholders like {audince} before they become runtime issues.
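You can catch the same class of typo without calling LangChain at all. A dependency-free sketch (the extractPlaceholders and findMissingVariables helpers are illustrative, not part of LangChain's API) that compares the {variables} in a template string against the keys you plan to supply:

```typescript
// Extract {placeholder} names from a template string with a simple regex.
// Illustrative helper, not part of LangChain's API.
function extractPlaceholders(template: string): string[] {
  const matches = template.match(/\{([a-zA-Z_][a-zA-Z0-9_]*)\}/g) ?? [];
  return matches.map((m) => m.slice(1, -1));
}

// Report placeholders that have no matching input key (e.g. the {audince} typo).
function findMissingVariables(
  template: string,
  input: Record<string, string>
): string[] {
  return extractPlaceholders(template).filter((name) => !(name in input));
}

const template = "Explain {topic} for a {audince} audience.";
const missing = findMissingVariables(template, { topic: "x", audience: "y" });
console.log(missing); // ["audince"] — the typo surfaces before any model call
```

Running a check like this in CI over your template strings keeps placeholder drift from reaching production prompts.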

For more confidence, add unit tests around your validation function and snapshot tests around rendered messages. That gives you coverage on both the contract and the final prompt shape.

Next Steps

  • Add MessagesPlaceholder to support conversation history in multi-turn workflows
  • Split prompts into reusable modules per use case, then compose them with .pipe()
  • Learn output parsers so your templates feed structured JSON into downstream services

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

