LangGraph Tutorial (TypeScript): building prompt templates for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to build reusable prompt templates inside a LangGraph TypeScript workflow, then wire them into a graph that can switch prompts based on state. You need this when your agent has multiple behaviors — for example, a support triage prompt, a compliance prompt, and a summarization prompt — and you want those prompts managed cleanly instead of hardcoded inside node logic.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • @langchain/langgraph
  • @langchain/core
  • @langchain/openai
  • An OpenAI API key in OPENAI_API_KEY
  • A working TypeScript runtime such as tsx or ts-node
  • Basic familiarity with LangGraph nodes, edges, and state
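
Install the packages up front; a typical setup with npm looks like this (swap in pnpm or yarn if that is your package manager):
npm install @langchain/langgraph @langchain/core @langchain/openai
npm install -D typescript tsx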

Step-by-Step

  1. Start by defining the state your graph will carry. For prompt templating, the important part is to keep structured fields like task, tone, and input separate so the template can render them predictably.
import { Annotation } from "@langchain/langgraph";

export const GraphState = Annotation.Root({
  input: Annotation<string>(),
  task: Annotation<string>(),
  tone: Annotation<string>(),
  prompt: Annotation<string>(),
  output: Annotation<string>(),
});
  2. Build a prompt factory instead of embedding strings directly in nodes. This gives you one place to control formatting, instructions, and variable interpolation.
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Note: .partial() is async in @langchain/core, so this factory returns a Promise.
export async function buildPrompt(task: string) {
  return ChatPromptTemplate.fromMessages([
    [
      "system",
      "You are a senior assistant for regulated enterprise workflows. Follow the task strictly.",
    ],
    [
      "system",
      "Task: {task}\nTone: {tone}\nRules: be concise, precise, and avoid speculation.",
    ],
    ["human", "{input}"],
  ]).partial({ task });
}
  3. Add a node that renders the prompt from graph state. This is where advanced developers usually get value: the graph decides which template to use, but the node only concerns itself with filling variables.
import { GraphState } from "./state";
import { buildPrompt } from "./prompt";

export async function renderPromptNode(state: typeof GraphState.State) {
  // buildPrompt is async because .partial() is.
  const prompt = await buildPrompt(state.task);
  const messages = await prompt.formatMessages({
    input: state.input,
    tone: state.tone,
  });

  return {
    // BaseMessage has no role property; _getType() returns "system", "human", etc.
    prompt: messages.map((m) => `${m._getType()}: ${m.content}`).join("\n"),
  };
}
  4. Add an LLM node that renders the same template and passes the structured messages to the model. Using formatMessages() keeps the model input aligned with chat semantics instead of flattening everything too early; the flattened string the previous node stored in state.prompt is there for inspection, not for the model.
import { ChatOpenAI } from "@langchain/openai";
import { GraphState } from "./state";
import { buildPrompt } from "./prompt";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export async function answerNode(state: typeof GraphState.State) {
  const prompt = await buildPrompt(state.task);
  const messages = await prompt.formatMessages({
    input: state.input,
    tone: state.tone,
  });

  const response = await model.invoke(messages);
  return {
    // content can be a string or an array of content blocks;
    // toString() is sufficient for the plain-text responses here.
    output: response.content.toString(),
  };
}
  5. Wire the nodes into a graph and compile it. Keep the flow simple here: start by rendering the prompt, then call the model, then finish.
import { StateGraph, START, END } from "@langchain/langgraph";
import { GraphState } from "./state";
import { renderPromptNode } from "./renderPromptNode";
import { answerNode } from "./answerNode";

const graph = new StateGraph(GraphState)
  .addNode("renderPrompt", renderPromptNode)
  .addNode("answer", answerNode)
  .addEdge(START, "renderPrompt")
  .addEdge("renderPrompt", "answer")
  .addEdge("answer", END);

export const app = graph.compile();
  6. Run the graph with different tasks to prove your templates are reusable. The same workflow can now serve multiple enterprise use cases without changing node code.
import { app } from "./graph";

async function main() {
  // Only the channels the nodes read need to be supplied;
  // prompt and output are filled in by the graph.
  const result = await app.invoke({
    input: "Summarize this customer complaint into one sentence.",
    task: "Summarize customer-facing text for support triage",
    tone: "neutral",
  });

  console.log(result.output);
}

main().catch(console.error);
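
Run it with whichever TypeScript runtime you set up earlier; with tsx, for example (the entry filename main.ts is an assumption):
OPENAI_API_KEY=your-key npx tsx main.ts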

Testing It

Run the script once with a short input and once with a more complex one. You should see stable formatting in the rendered prompt and largely consistent output, since temperature is set to 0 (which reduces sampling variance, though it does not guarantee identical completions).

If you want to inspect the intermediate template output, log state.prompt after renderPromptNode returns it or add a temporary debug node between rendering and inference. That makes it obvious whether your variables are being injected correctly.
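
A throwaway debug node is only a few lines; the name debugPromptNode and its file are illustrative, not part of the graph built above:
// debugPrompt.ts: a temporary node for inspecting the rendered prompt.
import { GraphState } from "./state";

// Hypothetical helper node: logs the rendered prompt and changes nothing.
export function debugPromptNode(state: typeof GraphState.State) {
  console.log("--- rendered prompt ---\n" + state.prompt);
  return {}; // empty update: purely a side-effect node
}

Wire it between renderPrompt and answer with two addEdge calls while debugging, then remove it.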

Try changing only the task field while keeping the rest of the graph unchanged. If your template design is good, you should get different model behavior without touching any node implementation.
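
For example, the same compiled app can serve a compliance review without any node changes (the task and input strings below are illustrative):
const review = await app.invoke({
  input: "We guarantee a full refund within 24 hours, no questions asked.",
  task: "Review customer-facing text for compliance risks",
  tone: "formal",
});

console.log(review.output);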

Next Steps

  • Add conditional routing so different tasks select different templates (see the sketch after this list)
  • Use MessagesAnnotation when you need full conversation history in state
  • Add schema validation for template variables before calling .formatMessages()
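
A minimal sketch of the first item, assuming a second node (complianceAnswerNode, hypothetical) built like answerNode but on a compliance-specific template:
import { StateGraph, START, END } from "@langchain/langgraph";
import { GraphState } from "./state";
import { answerNode } from "./answerNode";
// Hypothetical: same shape as answerNode, different template inside.
import { complianceAnswerNode } from "./complianceAnswerNode";

// Pick the node (and therefore the template) from graph state.
function routeByTask(state: typeof GraphState.State) {
  return state.task.toLowerCase().includes("compliance")
    ? "complianceAnswer"
    : "answer";
}

const graph = new StateGraph(GraphState)
  .addNode("answer", answerNode)
  .addNode("complianceAnswer", complianceAnswerNode)
  .addConditionalEdges(START, routeByTask, ["answer", "complianceAnswer"])
  .addEdge("answer", END)
  .addEdge("complianceAnswer", END);

export const routedApp = graph.compile();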

By Cyprian Aarons, AI Consultant at Topiax.