How to Fix 'prompt template error' in LangChain (Python)
What this error means
A "prompt template error" in LangChain usually means the prompt you built does not match the variables you pass at runtime. It shows up when `PromptTemplate`, `ChatPromptTemplate`, or an LCEL chain tries to format a template and finds missing, extra, or malformed placeholders.
You typically hit it when wiring `chain.invoke(...)`, switching from string prompts to chat prompts, or building templates with literal braces that LangChain thinks are variables.
The Most Common Cause
The #1 cause is a mismatch between the template's variables and the keys you pass into `.invoke()` or `.format()`.
A common failure looks like this:
| Broken | Fixed |
|---|---|
| Template expects `question`, you pass a different key | Pass `question` |
| Template expects `topic`, you pass a different key | Pass `topic` |
```python
from langchain_core.prompts import PromptTemplate

# BROKEN
prompt = PromptTemplate.from_template(
    "Answer the question: {question}"
)

# This raises a formatting error because 'question' is missing
print(prompt.format(query="What is LangChain?"))
```
The error is usually one of these:
- `KeyError: 'question'`
- `ValueError: Invalid prompt schema; check for mismatched input variables`
- `langchain_core.exceptions.OutputParserException` in downstream chains if the prompt never formats correctly
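LangChain's default template format is "f-string", which follows Python's own `str.format` semantics, so you can reproduce the core failure with plain Python and no LangChain install at all:

```python
# Python's str.format raises the same KeyError LangChain surfaces
# when a template variable has no matching key.
template = "Answer the question: {question}"

try:
    # Wrong keyword: the template wants 'question', we pass 'query'
    template.format(query="What is LangChain?")
except KeyError as err:
    print(f"KeyError: {err}")  # KeyError: 'question'
```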
Here is the correct version:
```python
from langchain_core.prompts import PromptTemplate

# FIXED
prompt = PromptTemplate.from_template(
    "Answer the question: {question}"
)

print(prompt.format(question="What is LangChain?"))
```
If you are using LCEL, the same issue appears with dictionaries passed into a chain:
```python
from langchain_core.prompts import ChatPromptTemplate

# BROKEN
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "Summarize this: {text}")
])

# Missing 'text': the payload key is 'content', not 'text'
result = prompt.invoke({"content": "Some text"})
```
Fixed:
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "Summarize this: {text}")
])

result = prompt.invoke({"text": "Some text"})
```
Other Possible Causes
1. Unescaped braces in literal text
If your prompt includes JSON, Python dicts, or examples with {} braces, LangChain treats them as variables.
```python
from langchain_core.prompts import PromptTemplate

# BROKEN: {"name": ...} is parsed as template variables
prompt = PromptTemplate.from_template(
    'Return JSON like {"name": "John", "age": 30}'
)
```
Fix by escaping braces:
```python
from langchain_core.prompts import PromptTemplate

# FIXED: doubled braces are treated as literal braces
prompt = PromptTemplate.from_template(
    'Return JSON like {{"name": "John", "age": 30}}'
)
```
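The escaping rule is inherited from Python's `str.format`: doubled braces render as single literal braces, while single braces stay placeholders. A plain-Python check, no LangChain required:

```python
# '{{' and '}}' render as literal braces; '{name}' and '{age}' are filled in.
template = 'Return JSON like {{"name": "{name}", "age": {age}}}'
print(template.format(name="John", age=30))
# Return JSON like {"name": "John", "age": 30}
```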
2. Wrong input type for chat prompts
ChatPromptTemplate expects a mapping of variables, not a raw string.
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("human", "Translate this: {text}")
])

# BROKEN: a raw string is not a mapping of variables
prompt.invoke("Hello")
```
Fixed:
```python
prompt.invoke({"text": "Hello"})
```
3. Partial variables not set correctly
If your template has partials and you forget one, formatting fails later.
```python
from langchain_core.prompts import PromptTemplate

# BROKEN if 'language' is never supplied or partially bound incorrectly
prompt = PromptTemplate(
    template="Translate {text} to {language}",
    input_variables=["text", "language"]
)

print(prompt.format(text="hello"))  # fails: 'language' is missing
```
Fixed with partial binding:
```python
prompt = PromptTemplate(
    template="Translate {text} to {language}",
    input_variables=["text"]
).partial(language="French")

print(prompt.format(text="hello"))
```
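Conceptually, `.partial()` just pre-binds some keys so only the rest must arrive at runtime. A plain-Python analogue using dict merging (the `render` helper here is illustrative, not a LangChain API):

```python
# Pre-bind defaults, merge runtime values over them at call time.
template = "Translate {text} to {language}"
defaults = {"language": "French"}  # the "partial" binding

def render(**runtime):
    # Runtime keys win over defaults, mirroring partial-variable behavior.
    return template.format(**{**defaults, **runtime})

print(render(text="hello"))  # Translate hello to French
```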
4. Mismatched variable names across chain steps
This happens in LCEL when one step outputs `input` but the prompt expects `question`.
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("human", "{question}")
])

# BROKEN: upstream key is 'input', not 'question'
chain_input = {"input": "What is AML?"}
```
Fix by aligning keys or mapping them explicitly:
```python
from langchain_core.runnables import RunnableLambda

mapper = RunnableLambda(lambda x: {"question": x["input"]})
fixed_input = mapper.invoke({"input": "What is AML?"})
```
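The `RunnableLambda` above is just a key rename. The same idea as a plain function, shown here without LangChain (`remap_keys` is a hypothetical helper name, not a library API):

```python
def remap_keys(payload, mapping):
    """Rename upstream keys to the names the prompt expects."""
    return {mapping.get(key, key): value for key, value in payload.items()}

print(remap_keys({"input": "What is AML?"}, {"input": "question"}))
# {'question': 'What is AML?'}
```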
How to Debug It
1. Print the template variables. Check what LangChain thinks the prompt needs:

   ```python
   print(prompt.input_variables)
   ```

   If you see `['question', 'context']`, your runtime dict must contain both keys.

2. Inspect the exact payload passed into `.invoke()`. Log the dictionary before it hits the chain; most errors are just key mismatches:

   ```python
   print(payload)
   chain.invoke(payload)
   ```

3. Look for literal braces in your prompt text. Search for `{` and `}` inside examples, JSON snippets, or markdown. If they are not placeholders, escape them as `{{` and `}}`.

4. Reduce the chain to only the prompt. Call `.format()` or `.invoke()` on the prompt alone. If it fails there, the bug is in templating, not your model or retriever.
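For f-string-style templates you can also extract placeholder names with the stdlib `string.Formatter` and diff them against your payload. `prompt.input_variables` is the LangChain-native way; this sketch is useful when you only have the raw template string:

```python
from string import Formatter

def placeholder_names(template: str) -> set:
    """Extract {placeholder} names from an f-string-style template."""
    return {name for _, name, _, _ in Formatter().parse(template) if name}

template = "Answer using {context}: {question}"
payload = {"question": "What is LCEL?"}

missing = placeholder_names(template) - payload.keys()
print(missing)  # {'context'}
```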
Prevention
- Keep variable names consistent across retriever, mapper, and prompt layers.
- Treat every `{...}` in prompt text as a placeholder unless escaped.
- Add unit tests that call `prompt.format(...)` and `chat_prompt.invoke(...)` before wiring in the LLM.
If you build chains for production systems, validate prompt inputs at boundaries. A small schema check with Pydantic or explicit key assertions will catch this class of error before it reaches runtime.
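A minimal boundary check can be stdlib-only; Pydantic would give richer errors, but plain set arithmetic catches the key-mismatch class. The `check_prompt_payload` helper is a sketch, and in practice `expected` would come from `prompt.input_variables`:

```python
def check_prompt_payload(expected, payload):
    """Fail fast if the payload keys do not exactly match the expected variables."""
    missing = set(expected) - payload.keys()
    extra = payload.keys() - set(expected)
    if missing or extra:
        raise ValueError(
            f"prompt input mismatch: missing={sorted(missing)}, extra={sorted(extra)}"
        )
    return payload

check_prompt_payload({"question"}, {"question": "What is AML?"})  # passes
```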
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.