LangGraph Tutorial (TypeScript): building prompt templates for beginners
This tutorial shows you how to build reusable prompt templates inside a LangGraph workflow in TypeScript. Use this pattern when your agent needs consistent prompts, safer handling of user input, and a clean way to swap instructions without rewriting graph logic.
What You'll Need
- Node.js 18+
- A TypeScript project with `ts-node` or `tsx`
- These packages:
  - `@langchain/langgraph`
  - `@langchain/core`
  - `@langchain/openai`
  - `zod`
- An OpenAI API key in `OPENAI_API_KEY`
- Basic familiarity with LangGraph nodes, edges, and state
Step-by-Step
- Start by defining a small state shape for the graph. For beginner-friendly prompt templates, keep the state focused on the user input, the generated prompt, and the model output.

```ts
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const GraphState = Annotation.Root({
  topic: Annotation<string>(),
  prompt: Annotation<string>(),
  answer: Annotation<string>(),
});

type GraphStateType = typeof GraphState.State;
```
- Create a prompt template that turns raw user input into a clear instruction for the model. This is the part beginners usually miss: don't send unstructured text straight to the LLM when you can normalize it first.

```ts
const template = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful tutor. Explain concepts simply and clearly."],
  [
    "human",
    "Teach a beginner about {topic}. Use one analogy and one short example.",
  ],
]);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.2,
});
```
- Add a node that formats the prompt before calling the model. This keeps your graph readable and makes it easy to test prompt behavior independently of generation.

```ts
async function buildPrompt(state: GraphStateType) {
  const messages = await template.formatMessages({
    topic: state.topic,
  });
  // BaseMessage has no `role` property; use _getType(), which
  // returns "system", "human", "ai", etc.
  const promptText = messages
    .map((m) => `${m._getType()}: ${m.content}`)
    .join("\n");
  return { prompt: promptText };
}
```
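Because the prompt-building step is pure string assembly, you can unit-test the flattening logic without touching the LLM. A minimal sketch, using plain objects in place of LangChain's message classes (the `SimpleMessage` type and `flattenMessages` helper are illustrative names, not part of any library):

```typescript
// Hypothetical stand-in for the objects returned by formatMessages.
type SimpleMessage = { role: string; content: string };

// Mirrors the flattening done inside buildPrompt: one "role: content" per line.
function flattenMessages(messages: SimpleMessage[]): string {
  return messages.map((m) => `${m.role}: ${m.content}`).join("\n");
}

const flattened = flattenMessages([
  { role: "system", content: "You are a helpful tutor." },
  { role: "human", content: "Teach a beginner about Zod schemas." },
]);

console.log(flattened);
```

Testing this step in isolation means a bad answer later can be traced to either the template wording or the model, not both at once.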
- Add a second node that sends the formatted prompt to the model and stores the response. In production, I prefer keeping formatting and inference separate because it makes retries and debugging much easier.

```ts
async function generateAnswer(state: GraphStateType) {
  const response = await llm.invoke(state.prompt);
  return { answer: response.content.toString() };
}
```
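Keeping inference in its own node also makes it easy to wrap the model call in a retry. A minimal sketch of a generic retry helper (the `withRetries` name and attempt count are illustrative, not a LangChain API; note that `ChatOpenAI` also accepts a built-in `maxRetries` option):

```typescript
// Generic retry wrapper you could put around llm.invoke inside generateAnswer.
async function withRetries<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}

// Demo with a deliberately flaky function instead of a real model call.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
};

const retryResult = await withRetries(flaky);
console.log(retryResult, "after", calls, "attempts");
```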
- Wire the nodes into a simple graph and compile it. This is enough to prove the pattern before you expand it into multi-step agent flows.

```ts
const graph = new StateGraph(GraphState)
  .addNode("buildPrompt", buildPrompt)
  .addNode("generateAnswer", generateAnswer)
  .addEdge(START, "buildPrompt")
  .addEdge("buildPrompt", "generateAnswer")
  .addEdge("generateAnswer", END);

const app = graph.compile();
```
- Run the graph with a beginner topic and print both the generated prompt and final answer. This gives you visibility into what the model actually received, which is useful when prompts behave badly.

```ts
// Top-level await requires an ESM setup (e.g. "type": "module" with tsx);
// otherwise, wrap this in an async main() function.
const result = await app.invoke({
  topic: "LangGraph state management",
});

console.log("PROMPT:\n", result.prompt);
console.log("\nANSWER:\n", result.answer);
```
Testing It
Run the file with `tsx` or `ts-node` after setting `OPENAI_API_KEY`. You should see two outputs: the formatted prompt string, then the model's explanation of your chosen topic.
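For example, assuming the code above is saved as `prompt-graph.ts` (the filename is illustrative):

```shell
# Set your key for this shell session, then run the script with tsx.
export OPENAI_API_KEY="sk-your-key-here"
npx tsx prompt-graph.ts
```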
Check that your prompt includes both the system guidance and the `{topic}` value you passed into state. If the answer is vague or off-target, tighten the wording in `ChatPromptTemplate.fromMessages`.
A good smoke test is to change topic to something very specific like "TypeScript generics" or "Zod schemas". If your template is working correctly, the structure of the answer should stay consistent while only the subject changes.
Next Steps
- Add structured output with `zod` so your graph returns typed data instead of plain text
- Split prompts into separate templates for routing, tool use, and final responses
- Add memory or checkpointing so users can continue conversations across multiple graph runs
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.