LangGraph Tutorial (TypeScript): building prompt templates for intermediate developers
This tutorial shows you how to build reusable prompt templates inside a LangGraph TypeScript workflow, then wire them into a simple graph node. You need this when your agent logic involves more than a single prompt and you want a clean separation between system instructions, user input, and structured outputs.
What You'll Need

- Node.js 18+
- TypeScript 5+
- An OpenAI API key
- These packages: `@langchain/langgraph`, `@langchain/core`, `@langchain/openai`, and `zod`
- A TypeScript project with `tsconfig.json` set up for ES modules or compatible CommonJS output
- Basic familiarity with LangGraph nodes and edges
Step-by-Step
- Start by defining a prompt template that separates the system role from the user payload. For intermediate developers, the main goal is to keep prompts reusable and typed instead of hardcoding strings inside nodes.

```ts
import { ChatPromptTemplate } from "@langchain/core/prompts";

export const supportPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a banking support assistant. Answer clearly and concisely.",
  ],
  [
    "human",
    "Customer issue: {issue}\nCustomer segment: {segment}\nTone: {tone}",
  ],
]);
```
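If the placeholder syntax feels like magic, it isn't: at format time the template substitutes the values of `issue`, `segment`, and `tone` into the message strings. A dependency-free sketch of that substitution (`fillTemplate` is an illustrative helper, not a LangChain API):

```ts
// Minimal sketch of f-string-style substitution, similar in spirit to
// what ChatPromptTemplate does at format time. Illustrative only.
type TemplateVars = Record<string, string>;

function fillTemplate(template: string, vars: TemplateVars): string {
  return template.replace(/\{(\w+)\}/g, (_match, key: string) => {
    if (!(key in vars)) {
      // Real templates also fail loudly on a missing input variable.
      throw new Error(`Missing template variable: ${key}`);
    }
    return vars[key];
  });
}

const human =
  "Customer issue: {issue}\nCustomer segment: {segment}\nTone: {tone}";
const filled = fillTemplate(human, {
  issue: "My debit card was declined abroad.",
  segment: "premium",
  tone: "calm and reassuring",
});
console.log(filled);
```

The real `ChatPromptTemplate` adds message roles, partials, and validation on top, but the core idea is exactly this substitution.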
- Next, define the shape of the state your graph will pass around. This keeps your prompt inputs explicit and makes it easier to validate what each node expects.

```ts
import { z } from "zod";

export const GraphStateSchema = z.object({
  issue: z.string(),
  segment: z.string(),
  tone: z.string().default("professional"),
});

export type GraphState = z.infer<typeof GraphStateSchema>;
```
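Written out by hand, the schema's contract looks like this. A dependency-free sketch of what `GraphStateSchema.parse` enforces (`parseGraphState` is an illustrative helper, not a Zod API, and is simplified: a non-string `tone` falls back to the default here, whereas Zod would reject it):

```ts
// Hand-rolled equivalent of the schema above: require string `issue`
// and `segment`, and fall back to the default tone when it is absent.
interface ParsedGraphState {
  issue: string;
  segment: string;
  tone: string;
}

function parseGraphState(input: unknown): ParsedGraphState {
  const record = input as Record<string, unknown>;
  for (const field of ["issue", "segment"] as const) {
    if (typeof record?.[field] !== "string") {
      throw new Error(`Expected string field: ${field}`);
    }
  }
  return {
    issue: record.issue as string,
    segment: record.segment as string,
    tone: typeof record.tone === "string" ? record.tone : "professional",
  };
}
```

The point of reaching for Zod instead of hand-rolling this is that the schema and the inferred `GraphState` type can never drift apart.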
- Now create a model node that formats the prompt and sends it to an LLM. The important part is that the template is filled from state values at runtime, so you can reuse the same template across multiple branches later.

```ts
import { ChatOpenAI } from "@langchain/openai";
import { supportPrompt } from "./prompt.js";
import type { GraphState } from "./state.js";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export async function supportNode(state: GraphState) {
  const messages = await supportPrompt.formatMessages({
    issue: state.issue,
    segment: state.segment,
    tone: state.tone,
  });
  const response = await model.invoke(messages);
  return {
    response: response.content,
  };
}
```
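Notice that `supportNode` returns only the field it changed. That is the LangGraph contract: a node emits a partial state update, and the runtime merges it into the current state, by default a shallow overwrite per key (channels can define custom reducers). A simplified sketch of that merge:

```ts
// Simplified model of LangGraph's default state update: each node
// returns a partial object that is shallow-merged over the prior state.
interface SupportState {
  issue: string;
  segment: string;
  tone: string;
  response?: string;
}

function applyNodeUpdate(
  state: SupportState,
  update: Partial<SupportState>
): SupportState {
  return { ...state, ...update };
}

const before: SupportState = {
  issue: "My debit card was declined abroad.",
  segment: "premium",
  tone: "calm and reassuring",
};
const after = applyNodeUpdate(before, { response: "Sorry to hear that." });
```

Returning only the delta is what lets you add more nodes later without each one needing to know about the whole state shape.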
- Build the LangGraph workflow around that node. Even if this graph is simple, the pattern scales: you can add routing, retries, or human review without rewriting the prompt logic.

```ts
import { StateGraph, START, END } from "@langchain/langgraph";
import { z } from "zod";
import { supportNode } from "./node.js";

const AppState = z.object({
  issue: z.string(),
  segment: z.string(),
  tone: z.string().default("professional"),
  response: z.string().optional(),
});

const graph = new StateGraph(AppState)
  .addNode("support", supportNode)
  .addEdge(START, "support")
  .addEdge("support", END);

export const app = graph.compile();
```
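For a linear graph like this, what `compile()` and `invoke()` do is conceptually small: run each node in edge order, folding its partial update into the running state. A dependency-free sketch of that loop (real LangGraph adds checkpointing, branching, and reducers on top; the helper names here are illustrative):

```ts
// Minimal model of invoking a linear graph: call each node in order
// and shallow-merge its partial result into the running state.
type GraphNode<S> = (state: S) => Partial<S> | Promise<Partial<S>>;

async function runLinearGraph<S extends object>(
  initial: S,
  nodes: GraphNode<S>[]
): Promise<S> {
  let state = initial;
  for (const node of nodes) {
    const update = await node(state);
    state = { ...state, ...update };
  }
  return state;
}

// Usage with a stub node standing in for supportNode:
interface DemoState {
  issue: string;
  response?: string;
}

const demoNode: GraphNode<DemoState> = (s) => ({
  response: `Handled: ${s.issue}`,
});
```

Seeing the loop makes it clearer why adding a second node later is cheap: it is one more entry in the sequence, not a rewrite.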
- Finally, invoke the graph with real input and inspect the output. This gives you a production-style entry point where all prompt construction stays inside the node boundary.

```ts
import { app } from "./graph.js";

async function main() {
  const result = await app.invoke({
    issue: "My debit card was declined abroad.",
    segment: "premium",
    tone: "calm and reassuring",
  });
  console.log(result.response);
}

main().catch(console.error);
```
Testing It
Run the script with tsx, ts-node, or your preferred TypeScript runtime. If everything is wired correctly, you should see a concise banking-support response based on the three input fields.
To verify the template is working, change only one field at a time, such as tone or segment, and confirm the output changes accordingly. That tells you the graph is passing structured state into the prompt instead of relying on hidden globals.
If you get an API error, check that `OPENAI_API_KEY` is set in your environment before running the script. Also confirm that your package versions are compatible, since LangChain packages move quickly and mismatched versions can break imports.
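A small guard at the top of your entry point makes the missing-key case fail fast with an actionable message instead of an opaque API error. A sketch (`requireEnv` is an illustrative helper, not a library function):

```ts
// Illustrative helper: read a required environment variable or fail
// with a clear message before any network call is made.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(
      `${name} is not set. Export it before running the script.`
    );
  }
  return value;
}

// Call this in main() before invoking the graph, e.g.:
// requireEnv("OPENAI_API_KEY");
```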
Next Steps
- Add a second prompt template for classification before generation, then route with conditional edges
- Replace plain string output with a Zod schema and structured JSON parsing
- Split prompts into domain-specific modules so compliance teams can review them independently
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.