LangGraph Tutorial (Python): building prompt templates for intermediate developers
This tutorial shows you how to build prompt templates inside a LangGraph workflow in Python, then wire them into a runnable graph with state, branching, and structured inputs. You need this when your prompts are no longer one-off strings and you want reusable, testable templates that can be swapped per node, per role, or per task.
What You'll Need
- Python 3.10+
- `langgraph`
- `langchain-core`
- `langchain-openai`
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with:
  - `StateGraph`
  - nodes and edges
  - `TypedDict` or Pydantic-style state
- Optional but useful:
  - `python-dotenv` for local env loading
Install the packages:
```bash
pip install langgraph langchain-core langchain-openai python-dotenv
```
Step-by-Step
- Start by defining a small graph state and a prompt template. The key idea is to keep the prompt outside the node logic so it can be reused and tested independently.
```python
from typing import TypedDict

from langchain_core.prompts import ChatPromptTemplate


class AgentState(TypedDict):
    topic: str
    audience: str
    draft: str


prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert technical writer."),
    ("human", "Write a concise explanation of {topic} for {audience}."),
])

formatted = prompt.format_messages(
    topic="LangGraph state management",
    audience="intermediate Python developers",
)
print(formatted)
```
- Next, add an LLM and turn the prompt into a node function. This keeps the graph node thin: it formats inputs, calls the model, and writes output back into state.
```python
from langchain_openai import ChatOpenAI

# Reads OPENAI_API_KEY from the environment.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def write_draft(state: AgentState) -> AgentState:
    messages = prompt.format_messages(
        topic=state["topic"],
        audience=state["audience"],
    )
    response = llm.invoke(messages)
    return {
        **state,
        "draft": response.content,
    }
```
- Build the LangGraph workflow with a single node first. Even for simple graphs, this gives you a clean execution boundary and makes it easy to extend later with review or routing steps.
```python
from langgraph.graph import StateGraph, START, END

graph_builder = StateGraph(AgentState)
graph_builder.add_node("write_draft", write_draft)
graph_builder.add_edge(START, "write_draft")
graph_builder.add_edge("write_draft", END)

app = graph_builder.compile()
```
- Run the graph with real input data. Treat the prompt template as part of your application contract: if a required field is missing, the graph should fail early instead of producing garbage output.
```python
result = app.invoke({
    "topic": "prompt templates in LangGraph",
    "audience": "intermediate developers",
    "draft": "",
})
print(result["draft"])
```
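Failing early can be as simple as a guard in front of `app.invoke()`. Here is a minimal sketch with a hypothetical `validate_input` helper and a hard-coded field list; it is not part of LangGraph itself:

```python
REQUIRED_FIELDS = ("topic", "audience")


def validate_input(state: dict) -> None:
    """Raise before the graph runs if a template variable is missing or empty."""
    missing = [field for field in REQUIRED_FIELDS if not state.get(field)]
    if missing:
        raise ValueError(f"Missing required prompt inputs: {missing}")


validate_input({"topic": "prompt templates", "audience": "intermediate developers"})
```

Call `validate_input(...)` right before `app.invoke(...)` so a missing `audience` raises a `ValueError` instead of reaching the model.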
- Add a second template for refinement so you can see why templates matter in multi-step graphs. In production, this pattern is useful when one node drafts and another node rewrites based on policy, tone, or length constraints.
```python
refine_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a strict editor."),
    ("human", "Improve this draft for clarity and structure:\n\n{draft}"),
])


def refine_draft(state: AgentState) -> AgentState:
    messages = refine_prompt.format_messages(draft=state["draft"])
    response = llm.invoke(messages)
    return {
        **state,
        "draft": response.content,
    }


graph_builder = StateGraph(AgentState)
graph_builder.add_node("write_draft", write_draft)
graph_builder.add_node("refine_draft", refine_draft)
graph_builder.add_edge(START, "write_draft")
graph_builder.add_edge("write_draft", "refine_draft")
graph_builder.add_edge("refine_draft", END)

app = graph_builder.compile()
```
- If you want more control over template behavior, move to partial variables and reusable builders. This is the pattern I use when I need consistent system instructions across multiple graphs or tenants.
```python
base_prompt = ChatPromptTemplate.from_messages([
    ("system", "{system_role}"),
    ("human", "{task}"),
])

technical_writer_prompt = base_prompt.partial(
    system_role="You are an expert technical writer for Python engineers."
)

messages = technical_writer_prompt.format_messages(
    task="Explain how LangGraph prompt templates help separate concerns."
)
print(messages)
```
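One way to keep per-tenant system instructions consistent is a plain mapping that feeds `base_prompt.partial()`. The tenant names and role strings below are illustrative, not from any library:

```python
SYSTEM_ROLES = {
    # Hypothetical tenant-to-role mapping; extend per deployment.
    "fintech": "You are a compliance-aware technical writer for banking engineers.",
    "default": "You are an expert technical writer for Python engineers.",
}


def system_role_for(tenant: str) -> str:
    """Return the system instruction for a tenant, falling back to the default."""
    return SYSTEM_ROLES.get(tenant, SYSTEM_ROLES["default"])
```

You would then build `base_prompt.partial(system_role=system_role_for(tenant))` once per tenant and reuse the resulting template across graphs.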
Testing It
Run the script with a valid `OPENAI_API_KEY` in your environment and confirm you get a non-empty `draft` field back from `app.invoke()`. Then change the `topic` value and verify the output changes without touching any node logic.
A good test is to deliberately remove one required input like `audience` and confirm your app fails immediately instead of silently formatting an invalid prompt. If you added the refinement node, compare outputs before and after that step to make sure each node is doing one job only.
For more confidence, print the formatted messages from each template during development. That catches bad placeholders early, which is where most prompt-template bugs show up in LangGraph apps.
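To catch bad placeholders without calling the model, you can extract the `{variable}` names from a format-style template string with the standard library's `string.Formatter`. This is a development-time sketch, not a LangChain API:

```python
from string import Formatter


def template_variables(template: str) -> set:
    """Return the {placeholder} names used in a format-style template string."""
    return {name for _, name, _, _ in Formatter().parse(template) if name}


print(template_variables("Write a concise explanation of {topic} for {audience}."))
```

Comparing that set against your state keys in a unit test flags placeholder typos before they ever reach the graph.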
Next Steps
- Add conditional routing with `add_conditional_edges()` so different prompts run for different document types.
- Replace string outputs with structured outputs using Pydantic models and `.with_structured_output()`.
- Externalize prompts into versioned files so product teams can review them without editing Python code.
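As a rough sketch of the conditional-routing idea: the router is just a function over state that returns the name of the next node. The `doc_type` field and node names here are hypothetical:

```python
def route_by_doc_type(state: dict) -> str:
    """Pick the next node based on a hypothetical doc_type field in state."""
    if state.get("doc_type") == "api_reference":
        return "write_reference"
    return "write_draft"
```

You would register it with `graph_builder.add_conditional_edges(START, route_by_doc_type)` so each document type flows into a node with its own prompt template.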
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit