How to Fix 'prompt template error' in LangGraph (Python)

By Cyprian Aarons · Updated 2026-04-22
prompt-template-error · langgraph · python

A 'prompt template error' in LangGraph usually means the prompt could not be rendered before the model call. In practice, this happens when your prompt variables do not match the keys in the state you pass through the graph, or when you hand LangChain a malformed template.

You’ll usually see it during node execution, especially in a StateGraph node that formats a ChatPromptTemplate, PromptTemplate, or MessagesPlaceholder.

The Most Common Cause

The #1 cause is a mismatch between template variables and the keys in your graph state.

LangGraph passes state into your node, but ChatPromptTemplate still expects exact variable names. If your prompt says {question} and your state only has an input key, you’ll see an error like:

  • KeyError: 'question'
  • KeyError: "Input to ChatPromptTemplate is missing variables {'question'}"
  • A validation error from langchain_core complaining about missing prompt input variables

Broken vs fixed pattern

  • Broken: the template expects question, the state provides input. Fixed: the template and the state both use question.
  • Broken: the node passes the raw state dict with the wrong keys. Fixed: the node maps state keys explicitly before formatting.
# BROKEN
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

class State(TypedDict):
    input: str
    output: str

llm = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

def answer_node(state: State):
    # state has "input", but prompt expects "question"
    messages = prompt.format_messages(**state)
    return {"output": llm.invoke(messages).content}

graph = StateGraph(State)
graph.add_node("answer", answer_node)
graph.add_edge(START, "answer")
graph.add_edge("answer", END)
app = graph.compile()

# FIXED
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

class State(TypedDict):
    input: str
    output: str

llm = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

def answer_node(state: State):
    messages = prompt.format_messages(question=state["input"])
    return {"output": llm.invoke(messages).content}

graph = StateGraph(State)
graph.add_node("answer", answer_node)
graph.add_edge(START, "answer")
graph.add_edge("answer", END)
app = graph.compile()
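
To confirm the wiring end to end, invoke the compiled graph with the key the state schema actually declares (this calls the model, so it assumes an OpenAI API key is configured; the question text is just an example):

result = app.invoke({"input": "What is ACH?"})
print(result["output"])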

If you use MessagesPlaceholder, the same rule applies. This is a common failure mode:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer clearly."),
    MessagesPlaceholder(variable_name="messages"),
])

If your state does not contain messages as a list of LangChain message objects, formatting will fail.
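
For example, formatting the prompt above succeeds once the value is a list of message objects (a quick check you can run outside the graph; the question text is illustrative):

from langchain_core.messages import HumanMessage

state = {"messages": [HumanMessage(content="What is ACH?")]}
prompt.format_messages(**state)  # the placeholder receives a list of messages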

Other Possible Causes

1. Missing required variables in partial rendering

If your prompt has multiple placeholders and one is absent, LangChain throws a render-time error.

# BROKEN
prompt = ChatPromptTemplate.from_template(
    "Summarize {topic} for {audience}"
)

prompt.format(topic="payments")
# raises a missing-variable error: 'audience' was never supplied

# FIXED
prompt.format(topic="payments", audience="compliance analysts")
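
If one of the variables is fixed for your use case, you can also pre-fill it with partial() so only the remaining variables need to be supplied at render time (a small sketch reusing the same template):

from langchain_core.prompts import ChatPromptTemplate

base = ChatPromptTemplate.from_template("Summarize {topic} for {audience}")
for_analysts = base.partial(audience="compliance analysts")
for_analysts.format(topic="payments")  # only 'topic' is still required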

2. Passing the wrong type into the template

A variable may exist, but be the wrong type. This shows up often with message history or nested objects.

# BROKEN
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="messages"),
])

prompt.format_messages(messages="hello")  # string where a list of messages is expected

Pass a list of message objects instead:

# FIXED
from langchain_core.messages import HumanMessage

prompt.format_messages(messages=[HumanMessage(content="hello")])

3. Mixing f-strings with LangChain template syntax incorrectly

LangChain templates use {var} placeholders. If you pre-format strings or escape braces incorrectly, the parser can break.

# BROKEN
user_text = "{not_a_variable}"
prompt = ChatPromptTemplate.from_template(f"Reply to {user_text}")
# The f-string inlines user_text, so the template now contains {not_a_variable}
# and LangChain treats it as a required input variable.

# FIXED
prompt = ChatPromptTemplate.from_template("Reply to {user_text}")
prompt.format(user_text=user_text)

If you need literal braces in the final text, escape them by doubling:

prompt = ChatPromptTemplate.from_template("Use {{braces}} literally around {value}")
prompt.format(value="this")  # the doubled braces come through as literal { }

4. Using old LangChain/LangGraph versions with incompatible prompt APIs

Version drift causes confusing errors around prompt construction and message formatting.

Typical symptoms:

  • TypeError: ... got an unexpected keyword argument
  • ValueError from prompt validation after upgrading one package only

Check that these packages are aligned:

pip show langgraph langchain-core langchain-openai

If one package is much newer than the others, pin compatible versions together.
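
Once you have a combination that works, one simple way to capture it is to record the resolved versions and copy the pinned lines into your requirements file (assumes a Unix-like shell with grep; the exact versions depend on your project):

pip freeze | grep -E "langgraph|langchain"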

How to Debug It

  1. Print the exact state entering the failing node

    def answer_node(state):
        print("STATE:", state)
        ...
    

    Confirm the key names match the template variables exactly.

  2. Inspect the prompt variables before invocation

    print(prompt.input_variables)
    

    If this prints ['question'], your node must supply question.

  3. Format the prompt outside LangGraph first

    Take the same data and run it in a plain Python script.

    prompt.format_messages(question="What is ACH?")
    

    If it fails here, the problem is not LangGraph. It’s your prompt definition or inputs.

  4. Check whether you are using messages or strings

    A lot of graph nodes pass plain strings where LangChain expects structured messages; the snippet after this list shows both forms side by side.

    • Use strings for {variable} templates.
    • Use lists of HumanMessage, AIMessage, etc. for MessagesPlaceholder.
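
To make that last distinction concrete, here is a short side-by-side (the topic strings are illustrative):

from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# {variable} templates take plain strings
ChatPromptTemplate.from_template("Explain {topic}").format(topic="ACH")

# MessagesPlaceholder takes a list of message objects
ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="history"),
]).format_messages(history=[HumanMessage(content="Explain ACH")])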

Prevention

  • Keep graph state keys and prompt variables identical. If your node uses input, do not name the template variable question.
  • Validate prompt inputs inside each node before calling .format() or .invoke(); a small sketch follows this list.
  • Pin compatible versions of langgraph, langchain-core, and provider packages together in requirements files.
  • Add small unit tests that call each node function directly with sample state before wiring it into the graph.
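
Here is one way to implement the validation bullet above: compare the prompt's declared variables against the keys available in state before formatting (the helper name and error message are illustrative, not part of LangChain or LangGraph):

def format_checked(prompt, state: dict):
    # prompt.input_variables lists every placeholder the template expects
    missing = set(prompt.input_variables) - set(state)
    if missing:
        raise ValueError(f"Node state is missing prompt variables: {missing}")
    return prompt.format_messages(**{k: state[k] for k in prompt.input_variables})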

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

