How to Fix 'prompt template error during development' in LangGraph (Python)
A "prompt template error" during development in LangGraph usually means your graph reached a node that tried to format a prompt, but one or more template variables were missing or mismatched. In practice, it shows up during local testing when you wire `StateGraph`, `ChatPromptTemplate`, and an LLM node together, then pass the wrong state shape into the prompt.
The error is rarely about LangGraph itself. It’s usually a KeyError, ValueError, or TypeError coming from LangChain’s prompt formatting inside a node function.
The Most Common Cause
The #1 cause is a mismatch between the keys in your graph state and the variables expected by ChatPromptTemplate.
Typical runtime failures look like this:

- `KeyError: 'input'`
- `ValueError: Prompt must accept variables {'messages'}`
- `ValueError: Missing some input keys: {'question'}`
Here’s the broken pattern:
| Broken | Fixed |
|---|---|
| State key is `query`, prompt expects `question` | State key matches the prompt variable |
| Node passes raw state without mapping | Node formats the prompt with explicit fields |
```python
# BROKEN
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

class State(TypedDict):
    query: str

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}")
])

llm = ChatOpenAI(model="gpt-4o-mini")

def call_model(state: State):
    # KeyError / ValueError here because state has no "question"
    messages = prompt.format_messages(question=state["question"])
    return {"answer": llm.invoke(messages).content}

# ... graph wiring omitted
```
```python
# FIXED
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

class State(TypedDict):
    question: str

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}")
])

llm = ChatOpenAI(model="gpt-4o-mini")

def call_model(state: State):
    messages = prompt.format_messages(question=state["question"])
    return {"answer": llm.invoke(messages).content}
```
If you use MessagesPlaceholder, the same rule applies. Your state must contain the exact key and type the template expects.
```python
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using the conversation history."),
    ("placeholder", "{messages}"),
])
```

If your graph returns `{"message_history": [...]}` instead of `{"messages": [...]}`, formatting fails immediately.
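One way to guard against this during a refactor is a small adapter node that maps a legacy state key onto the key the placeholder expects. This is a minimal sketch; `normalize_history` and `message_history` are illustrative names, not LangGraph APIs:

```python
def normalize_history(state: dict) -> dict:
    """Adapter node: expose a legacy 'message_history' key under the
    'messages' key that the placeholder expects."""
    if "messages" not in state and "message_history" in state:
        return {"messages": state["message_history"]}
    return {}  # no state update needed
```

Wiring a node like this ahead of the model node keeps old callers working while the rest of the graph standardizes on `messages`.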
Other Possible Causes
1) Wrong message type in state
LangGraph nodes often pass around lists of message objects. If you store strings instead of HumanMessage / AIMessage, prompt assembly can break.
```python
# BROKEN
state = {"messages": ["hello", "what is my balance?"]}

# FIXED
from langchain_core.messages import HumanMessage, AIMessage

state = {
    "messages": [
        HumanMessage(content="hello"),
        HumanMessage(content="what is my balance?")
    ]
}
```
If you’re using message history, keep it typed as actual LangChain message objects.
2) Returning the wrong shape from a node
A LangGraph node should return updates to the state schema. If you return a string or nested object where the next node expects flat keys, prompt formatting breaks later.
```python
# BROKEN
def extract_question(state):
    return "What is my policy status?"

# FIXED
def extract_question(state):
    return {"question": "What is my policy status?"}
```
This matters because downstream nodes read from state keys, not arbitrary return values.
3) Template variables don’t match partials or format args
If your prompt uses `{customer_name}` but you only pass `customer`, LangChain raises a formatting error before the model call.
```python
prompt = ChatPromptTemplate.from_template(
    "Hello {customer_name}, your claim {claim_id} is pending."
)

# BROKEN
prompt.format(customer="Sam", claim_id="CLM-123")

# FIXED
prompt.format(customer_name="Sam", claim_id="CLM-123")
```
This is common when refactoring variable names across multiple nodes.
4) Mixing chat prompts with plain strings incorrectly
ChatPromptTemplate expects structured messages. If you feed it a plain string where it expects message tuples or placeholders, it can fail during construction or formatting.
```python
# BROKEN
prompt = ChatPromptTemplate.from_messages([
    "You are a support agent.",
    "{question}"
])

# FIXED
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a support agent."),
    ("human", "{question}")
])
```
Use tuple-based message definitions unless you’re intentionally working with prebuilt message objects.
How to Debug It
- Print the exact state entering the failing node. Add logging right before prompt formatting:

  ```python
  def call_model(state):
      print("STATE:", state)
      ...
  ```

- Inspect the prompt variables. Check what your template actually requires:

  ```python
  print(prompt.input_variables)
  ```

  If it prints `['question', 'messages']`, your state must supply both.

- Run the node outside LangGraph first. Call it with a hardcoded dict:

  ```python
  call_model({"question": "test"})
  ```

  If it fails here, the issue is a prompt/state mismatch, not graph wiring.

- Check for message object types. If you're using conversation memory, verify each item is a LangChain message class:

  ```python
  from langchain_core.messages import BaseMessage

  all(isinstance(m, BaseMessage) for m in state["messages"])
  ```
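The first two checks can be automated with a small guard that compares state keys against the variables a format-style template mentions. This is a stdlib-only sketch of roughly the check LangChain performs when it raises "Missing some input keys"; the helper names are hypothetical:

```python
from string import Formatter

def template_variables(template: str) -> set:
    """Collect the {placeholder} names used in a format-style template."""
    return {field for _, field, _, _ in Formatter().parse(template) if field}

def missing_state_keys(state: dict, templates: list) -> set:
    """Return the template variables that the state dict does not supply."""
    required = set()
    for template in templates:
        required |= template_variables(template)
    return required - state.keys()
```

Calling `missing_state_keys({"query": "hi"}, ["{question}"])` returns `{"question"}`, which is exactly the mismatch behind the broken example earlier in this article.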
Prevention
- Keep one source of truth for your graph state schema. Use `TypedDict` or Pydantic models and do not invent new keys inside nodes.
- Match prompt variables exactly to state fields. Treat `{question}` and `{input}` as different APIs, because they are.
- Add unit tests for every node that formats prompts. Test missing keys, wrong types, and empty message lists before wiring the full graph.
If you standardize state shape early, this error disappears fast. In LangGraph projects, most “prompt template” failures are just contract violations between nodes and templates.
By Cyprian Aarons, AI Consultant at Topiax.