LangChain Tutorial (TypeScript): building prompt templates for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to build reusable, composable prompt templates in LangChain TypeScript for real applications: structured inputs, partial variables, output formatting, and branching prompts. You need this when simple string prompts stop being enough and you want templates that are maintainable across multiple workflows, teams, and model providers.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • A package manager: npm, pnpm, or yarn
  • An OpenAI API key set as OPENAI_API_KEY
  • These packages:
    • langchain
    • @langchain/core
    • @langchain/openai
    • zod
    • zod-to-json-schema
    • typescript
    • tsx or another TypeScript runner

Step-by-Step

  1. Start by installing the dependencies and setting up a minimal TypeScript project. The important part here is using the current LangChain package split: core prompt classes like ChatPromptTemplate come from @langchain/core, while models come from provider packages like @langchain/openai.
npm init -y
npm install langchain @langchain/core @langchain/openai zod zod-to-json-schema
npm install -D typescript tsx @types/node
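A minimal tsconfig.json along these lines is enough to run the examples with tsx; the exact compiler options are just one reasonable baseline, not a requirement:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}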
  2. Create a basic prompt template with typed input variables. This is the foundation: instead of concatenating strings manually, you define placeholders and let LangChain handle formatting consistently.
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a senior banking assistant. Answer precisely."],
  ["human", "Customer issue: {issue}\nAccount type: {accountType}"],
]);

async function main() {
  const formatted = await prompt.formatMessages({
    issue: "Card payment declined",
    accountType: "business checking",
  });

  console.log(formatted);
}

main();
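Prompt templates are also Runnables, so you can call invoke on them directly; the result is a chat prompt value that can be converted to messages or a single string. A minimal sketch using the same template as above:
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a senior banking assistant. Answer precisely."],
  ["human", "Customer issue: {issue}\nAccount type: {accountType}"],
]);

async function main() {
  // invoke() returns a ChatPromptValue rather than a message array.
  const value = await prompt.invoke({
    issue: "Card payment declined",
    accountType: "business checking",
  });

  console.log(value.toChatMessages()); // BaseMessage[]
  console.log(value.toString()); // flattened string form
}

main();
The same Runnable interface is what lets you pipe the prompt into a model in step 5.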
  3. Add partial variables when some context stays constant across requests. This is the pattern you want for production prompts where policy text, company tone, or compliance instructions should not be repeated at every call site. Note that partial() is async in the TypeScript API, so await the specialized template before formatting.
import { ChatPromptTemplate } from "@langchain/core/prompts";

const basePrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a claims reviewer for {company}. Follow policy: {policy}"],
  ["human", "Claim summary: {summary}\nRisk level: {riskLevel}"],
]);

async function main() {
  // partial() is async in LangChain JS, so await the specialized template.
  const prompt = await basePrompt.partial({
    company: "Topiax Insurance",
    policy: "Do not approve claims missing required documents.",
  });

  const messages = await prompt.formatMessages({
    summary: "Water damage in kitchen after pipe burst",
    riskLevel: "medium",
  });

  console.log(messages);
}

main();
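Because the base template stays in one place, you can bind different partials for different workflows. A sketch with hypothetical product lines and policy text (the names below are illustrative, not part of any API):
import { ChatPromptTemplate } from "@langchain/core/prompts";

const basePrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a claims reviewer for {company}. Follow policy: {policy}"],
  ["human", "Claim summary: {summary}\nRisk level: {riskLevel}"],
]);

async function main() {
  // Each product line binds its own policy text once; call sites keep
  // passing only the per-claim fields.
  const homeClaims = await basePrompt.partial({
    company: "Topiax Insurance",
    policy: "Do not approve claims missing required documents.",
  });

  const autoClaims = await basePrompt.partial({
    company: "Topiax Insurance",
    policy: "Escalate any claim above the repair estimate threshold.",
  });

  console.log(
    await homeClaims.formatMessages({ summary: "Burst pipe", riskLevel: "medium" })
  );
  console.log(
    await autoClaims.formatMessages({ summary: "Rear-end collision", riskLevel: "high" })
  );
}

main();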
  4. Use structured output instructions inside the template when you need machine-readable responses. In practice, this means giving the model a schema and telling it to return only JSON that matches it. Here the Zod schema is serialized to JSON Schema text with zod-to-json-schema so the model sees the actual field names and allowed values.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const schema = z.object({
  decision: z.enum(["approve", "deny", "review"]),
  reason: z.string(),
});

const basePrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a compliance analyst. Return only valid JSON matching this schema:\n{schema}",
  ],
  ["human", "Case details:\n{caseText}"],
]);

async function main() {
  // Serialize the Zod schema to JSON Schema text; schema.toString() would
  // not produce anything the model can use.
  const prompt = await basePrompt.partial({
    schema: JSON.stringify(zodToJsonSchema(schema), null, 2),
  });

  const messages = await prompt.formatMessages({
    caseText:
      "Customer requested refund for duplicate charge on debit card; merchant confirmed error.",
  });

  console.log(messages.map((m) => m.content));
}

main();
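If you want the parsed object rather than just formatted messages, most current chat model wrappers also expose withStructuredOutput, which accepts the same Zod schema and returns validated, typed output. A minimal sketch, assuming a provider and model that support structured output:
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const schema = z.object({
  decision: z.enum(["approve", "deny", "review"]),
  reason: z.string(),
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a compliance analyst."],
  ["human", "Case details:\n{caseText}"],
]);

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function main() {
  // withStructuredOutput binds the schema to the model call and parses the
  // response, so the chain yields a typed object instead of raw text.
  const chain = prompt.pipe(llm.withStructuredOutput(schema));

  const result = await chain.invoke({
    caseText: "Duplicate charge confirmed by merchant; customer requests refund.",
  });

  console.log(result.decision, result.reason);
}

main();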
  5. Chain the prompt into an actual model call so you can test end-to-end behavior. This is where template design matters most, because poor prompts become obvious once they hit a live model.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a fraud operations assistant."],
  ["human", "{question}"],
]);

async function main() {
  const chain = prompt.pipe(llm);

  const response = await chain.invoke({
    question: "Summarize why this transaction may require manual review.",
  });

  console.log(response.content);
}

main();
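Once the prompt is piped into a model you can keep composing. Adding a StringOutputParser at the end gives handlers a plain string instead of an AIMessage, which is usually what you want at an API boundary; a short sketch:
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a fraud operations assistant."],
  ["human", "{question}"],
]);

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function main() {
  // prompt -> model -> parser: each stage is a Runnable, so the composed
  // chain supports invoke, stream, and batch.
  const chain = prompt.pipe(llm).pipe(new StringOutputParser());

  const answer = await chain.invoke({
    question: "Summarize why this transaction may require manual review.",
  });

  console.log(answer); // plain string, not a message object
}

main();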
  6. Build branching templates with message placeholders when you need multi-turn context. This is useful for advanced assistants where prior conversation or retrieved context should be injected without rewriting the whole template each time.
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are an internal support assistant."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

async function main() {
  const messages = await prompt.formatMessages({
    history: [
      new HumanMessage("My transfer failed."),
      new AIMessage("What error did you see?"),
    ],
    input: "It says beneficiary verification failed.",
  });

  console.log(messages.map((m) => `${m._getType()}: ${m.content}`));
}

main();
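The branching part usually lives outside the template itself: keep one template per audience or workflow and pick between them before formatting. A plain TypeScript selection function is often enough (the audiences and wording below are illustrative); RunnableBranch is the heavier alternative once the branches multiply.
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const internalPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are an internal support assistant. Be terse and technical."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const customerPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a customer support assistant. Be warm and avoid jargon."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

// Pick the template per request; both share the same input variables, so the
// downstream chain does not change.
function selectPrompt(audience: "internal" | "customer") {
  return audience === "internal" ? internalPrompt : customerPrompt;
}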

Testing It

Run each script with npx tsx <file>.ts and confirm the formatted messages contain exactly the variables you expect. For the model-backed example, verify that temperature is set to 0 if you want stable outputs during testing.

If you use partial variables correctly, changing shared policy text in one place should update every downstream call without touching individual handlers. For message placeholders, make sure prior turns appear in order before the latest user input.

A good test is to intentionally omit one required variable and confirm LangChain throws early instead of sending a malformed request to the model.
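A quick way to script that check, using the basic template from step 2:
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a senior banking assistant. Answer precisely."],
  ["human", "Customer issue: {issue}\nAccount type: {accountType}"],
]);

async function main() {
  try {
    // accountType is deliberately omitted; formatting should throw here,
    // before anything reaches a model.
    await prompt.formatMessages({ issue: "Card payment declined" });
    console.error("Expected a missing-variable error");
  } catch (err) {
    console.log("Caught as expected:", (err as Error).message);
  }
}

main();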

Next Steps

  • Add output parsers and Zod validation so your templates produce typed results instead of raw text.
  • Learn about RunnableSequence and LCEL composition for larger agent workflows.
  • Combine prompt templates with retrievers so your system prompts can inject grounded context per request.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

