# How to Fix "prompt template error in production" in LangGraph (TypeScript)
If you’re seeing prompt template error in production in LangGraph, it usually means your node tried to format a prompt with missing or misnamed variables at runtime. In TypeScript, this often shows up only after deployment because local test inputs happen to satisfy the template, while real traffic does not.
The actual failure is usually thrown by LangChain's prompt layer, not LangGraph itself. You'll see errors like `Error: Missing value for input variable "messages"` or `Error: Invalid prompt schema`, and the stack trace will point into a `ChatPromptTemplate` or `PromptTemplate` call inside a graph node.
## The Most Common Cause
The #1 cause is a mismatch between the variables your template expects and the object you pass into `.invoke()`. In LangGraph, this happens a lot when one node returns state in one shape, but the next node formats a prompt with different keys.
Here are the two most common mismatches and their fixes:

| Broken | Fixed |
|---|---|
| Template expects `{question}` but runtime passes `{input}` | Align the state key with the template variable |
| Node reads `state.query` but graph state stores `state.question` | Use one canonical field name end-to-end |
```typescript
// BROKEN
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a bank support assistant."],
  ["human", "{question}"],
]);

export async function answerNode(state: { input: string }) {
  // Runtime error:
  // Error: Missing value for input variable "question"
  const formatted = await prompt.formatMessages({
    input: state.input,
  });
  return { messages: formatted };
}
```
```typescript
// FIXED
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a bank support assistant."],
  ["human", "{question}"],
]);

export async function answerNode(state: { question: string }) {
  const formatted = await prompt.formatMessages({
    question: state.question,
  });
  return { messages: formatted };
}
```
In production, this often happens after a refactor where you renamed state fields but missed one prompt node. LangGraph won't catch that at compile time if your state typing is loose or you're passing `any`.
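The cheapest guard is a single exported state type that every node imports. A minimal sketch (the `GraphState` shape and `readQuestion` node here are illustrative, not from the original code):

```typescript
// One canonical state module for the whole graph (illustrative shape).
// Every node imports this type instead of declaring its own inline state.
export interface GraphState {
  question: string;
  messages: unknown[];
}

// A node typed against the canonical state. If a refactor renames
// `question` in GraphState, tsc flags every node that still reads it,
// instead of the template failing at runtime in production.
export function readQuestion(state: GraphState): string {
  return state.question;
}
```

With this in place, smuggling a `{ input: ... }` object into `readQuestion` requires an explicit `as any`, which is exactly the loose typing that hides the bug.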
## Other Possible Causes

### 1) Passing the wrong shape into `ChatPromptTemplate.fromMessages()`
If you build prompts dynamically, it’s easy to pass malformed message tuples.
```typescript
// BROKEN
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Assistant for claims"],
  ["human", "{claimText}", "extra"], // invalid tuple shape
]);
```

```typescript
// FIXED
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Assistant for claims"],
  ["human", "{claimText}"],
]);
```
This can surface as `Invalid prompt schema` or a formatting error, depending on where it fails.
### 2) Using optional fields without guarding them
A production request may omit fields that were always present in your tests.
```typescript
// BROKEN
type State = { customerName?: string };

const prompt = ChatPromptTemplate.fromMessages([
  ["human", "Hello {customerName}"],
]);

// In production, customerName may be absent from the request
await prompt.formatMessages({ customerName: undefined });
```
```typescript
// FIXED
type State = { customerName?: string };

// Inside a node where `state: State`
if (!state.customerName) {
  throw new Error("Missing customerName in graph state");
}
await prompt.formatMessages({ customerName: state.customerName });
```
If the field is optional, validate it before formatting. Don’t let the template discover bad input first.
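One way to enforce that consistently is a small guard you run before every format call. This is a sketch of an assumed helper, not a LangChain API:

```typescript
// Sketch of a pre-format guard: throws a descriptive error if any
// required value is missing, so the failure names the field instead
// of surfacing deep inside the template layer.
export function requireVars<T extends Record<string, unknown>>(
  values: T,
  required: (keyof T & string)[],
): T {
  const missing = required.filter(
    (key) => values[key] === undefined || values[key] === null,
  );
  if (missing.length > 0) {
    throw new Error(`Missing prompt variables: ${missing.join(", ")}`);
  }
  return values;
}
```

Usage would look like `await prompt.formatMessages(requireVars({ customerName: state.customerName }, ["customerName"]))`, so the error message points at your state, not at LangChain internals.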
### 3) Confusing messages arrays with template variables

LangGraph state often contains `messages`, but your template may still need separate scalar variables.
```typescript
// BROKEN: the template also references {accountId},
// but only the messages array is passed
await prompt.formatMessages({
  messages: state.messages,
});
```

```typescript
// FIXED
await prompt.formatMessages({
  messages: state.messages,
  accountId: state.accountId,
});
```
A `messages` array is not automatically mapped into `{accountId}` or `{summary}`. Template variables must be passed explicitly.
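One pattern that keeps this explicit is a dedicated mapper from graph state to template variables, so every variable the template needs is named in one place. A sketch with illustrative field names:

```typescript
// Illustrative state shape for an answer node.
interface AnswerState {
  messages: { role: string; content: string }[];
  accountId: string;
}

// Build the template's variable object explicitly from graph state,
// rather than spreading state and hoping key names line up.
// Every template variable is declared here, in one place.
function toPromptVars(state: AnswerState) {
  return {
    messages: state.messages,
    accountId: state.accountId,
  };
}
```

Then the node calls `prompt.formatMessages(toPromptVars(state))`, and a renamed state field breaks this one function at compile time instead of the template at runtime.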
### 4) Mixing plain strings and message placeholders incorrectly

This shows up when using `ChatPromptTemplate` with placeholders like `{history}` but passing raw text instead of an array of messages.
```typescript
// BROKEN
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a fraud analyst"],
  ["placeholder", "{history}"],
]);

// {history} is a messages placeholder, but a plain string is passed
await prompt.formatMessages({
  history: "user asked about chargeback",
});
```
```typescript
// FIXED
await prompt.formatMessages({
  history: [
    { role: "user", content: "user asked about chargeback" },
    { role: "assistant", content: "asked for transaction id" },
  ],
});
```
Placeholders expect the type they were designed for. If you use message history placeholders, pass message objects, not a string blob.
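If history can arrive in either shape, one option is to normalize at the boundary before formatting. A sketch (the `HistoryMessage` type and helper are assumptions, not LangChain APIs):

```typescript
type HistoryMessage = { role: "user" | "assistant"; content: string };

// Accept either a raw string or an array of message objects,
// and always hand the placeholder an array.
function normalizeHistory(
  history: string | HistoryMessage[],
): HistoryMessage[] {
  if (typeof history === "string") {
    // Wrap a bare string as a single user message so the
    // {history} placeholder always receives message objects.
    return [{ role: "user", content: history }];
  }
  return history;
}
```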
## How to Debug It

**Print the exact variables before formatting.**

```typescript
console.log("prompt vars", {
  question: state.question,
  accountId: state.accountId,
  messagesCount: state.messages?.length,
});
```

Compare that output with the variables used in the template string.

**Inspect the template itself.** Search for `{...}` tokens in your `ChatPromptTemplate` or `PromptTemplate`. If you see `{claimSummary}`, make sure that key exists in every code path.

**Turn on stack traces around the failing node.** Wrap the node body so you can isolate whether the failure comes from graph routing or prompt formatting.

```typescript
try {
  return await answerNode(state);
} catch (err) {
  console.error("answerNode failed", err);
  throw err;
}
```

**Validate graph input at the boundary.** Use Zod or manual guards before entering LangGraph.

```typescript
import { z } from "zod";

const StateSchema = z.object({
  question: z.string().min(1),
  accountId: z.string().min(1),
});
```
## Prevention

- Keep one canonical state schema for the whole graph. If your template uses `{question}`, don't rename it to `input` in another node.
- Validate node inputs before calling `format()` or `formatMessages()`. Fail early with a clear message instead of letting LangChain throw deep inside execution.
- Add integration tests that run real graph inputs through each branch. Mocked unit tests miss missing-field bugs because they usually use idealized data.
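For the integration-test bullet, a tiny drift check can compare a template string's `{var}` tokens against a state object. A simplified sketch (it ignores `{{escaped}}` braces, which real LangChain f-string templates support):

```typescript
// Extract {var} tokens from a template string (single-brace,
// f-string style). Simplified: does not handle {{escaped}} braces.
function extractTemplateVars(template: string): string[] {
  const matches = template.match(/\{(\w+)\}/g) ?? [];
  return matches.map((m) => m.slice(1, -1));
}

// Report every template variable that has no matching key on the
// state object you plan to pass into formatMessages().
function findMissingVars(
  template: string,
  state: Record<string, unknown>,
): string[] {
  return extractTemplateVars(template).filter((v) => !(v in state));
}
```

An integration test can then assert `findMissingVars(template, sampleState)` is empty for every prompt node, which catches renamed fields before deployment.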
If you want to stop this class of bug completely, treat prompts like typed interfaces. In LangGraph TypeScript projects, most “production” prompt errors are just schema drift between nodes and templates.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.