LangGraph Tutorial (Python): building prompt templates for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to build prompt templates inside a LangGraph workflow in Python, then route state through those prompts in a way that is easy to test and extend. You need this when your agent has multiple decision points, different roles, or structured outputs that should be assembled consistently instead of being hand-written inside each node.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-core
  • langchain-openai
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with StateGraph, nodes, and edges in LangGraph

Install the packages:

pip install langgraph langchain-core langchain-openai

Step-by-Step

  1. Start by defining the graph state and the prompt templates you want to reuse. The key pattern is to keep prompts outside node logic so they can be tested and swapped without changing the graph structure.
from typing import TypedDict

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END


class AgentState(TypedDict):
    topic: str     # input: what to write about
    audience: str  # input: who the draft is for
    draft: str     # written by the write_draft node
    review: str    # written by the review_draft node


draft_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You write concise technical drafts for {audience}."),
        ("human", "Write a clear first draft about: {topic}"),
    ]
)

review_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a strict reviewer for technical writing."),
        ("human", "Review this draft for clarity and gaps:\n\n{draft}"),
    ]
)
  2. Create a model client and node functions that format prompt templates with graph state. Each node should do one job: render the prompt, call the model, and return only the fields it owns.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def write_draft(state: AgentState):
    messages = draft_prompt.format_messages(
        topic=state["topic"],
        audience=state["audience"],
    )
    response = llm.invoke(messages)
    return {"draft": response.content}


def review_draft(state: AgentState):
    messages = review_prompt.format_messages(draft=state["draft"])
    response = llm.invoke(messages)
    return {"review": response.content}
  3. Build the graph by wiring the nodes together in sequence. This keeps prompt construction inside nodes while LangGraph handles execution flow.
builder = StateGraph(AgentState)

builder.add_node("write_draft", write_draft)
builder.add_node("review_draft", review_draft)

builder.add_edge(START, "write_draft")
builder.add_edge("write_draft", "review_draft")
builder.add_edge("review_draft", END)

graph = builder.compile()
  4. Run the graph with an initial state and inspect both outputs. Notice that the prompts are parameterized by state, which makes this pattern useful for multi-audience systems.
result = graph.invoke(
    {
        "topic": "building prompt templates in LangGraph",
        "audience": "advanced Python developers",
        "draft": "",
        "review": "",
    }
)

print("DRAFT:\n", result["draft"])
print("\nREVIEW:\n", result["review"])
  5. If you want stricter control over prompt shape, use message placeholders and partial variables. This is useful when some context is fixed at build time and other parts come from runtime state.
base_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are writing for {audience}."),
        ("human", "{instruction}"),
    ]
)

specialized_prompt = base_prompt.partial(audience="advanced developers")

messages = specialized_prompt.format_messages(
    instruction="Explain why prompt templates should live outside node logic."
)

for msg in messages:
    print(f"{msg.type}: {msg.content}")

Testing It

Run the script and confirm that both draft and review are present in the final returned state. If you get an API error, check that OPENAI_API_KEY is exported in your shell before running Python.

A good test is to swap the topic value and verify that only the generated text changes while the graph code stays untouched. You should also confirm that each node returns only its own keys; if a node overwrites unrelated state, your workflow will become harder to reason about.

For local validation without spending tokens, replace ChatOpenAI with a small fake model wrapper or mock llm.invoke() in unit tests. That lets you assert prompt formatting separately from model behavior.
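
Here is a minimal sketch of that approach. FakeChat is a hypothetical stub (anything with an invoke() method returning an object with a .content attribute works), and my_graph_module stands in for whichever module holds your llm and node functions:

from types import SimpleNamespace


class FakeChat:
    """Stand-in for ChatOpenAI: records every prompt it receives and
    returns canned text, so tests never hit the network."""

    def __init__(self, reply):
        self.reply = reply
        self.calls = []

    def invoke(self, messages):
        self.calls.append(messages)
        return SimpleNamespace(content=self.reply)


def test_write_draft_renders_state(monkeypatch):
    import my_graph_module  # hypothetical module holding llm and the nodes

    fake = FakeChat("stub draft")
    monkeypatch.setattr(my_graph_module, "llm", fake)

    result = my_graph_module.write_draft(
        {"topic": "prompt templates", "audience": "devs", "draft": "", "review": ""}
    )

    assert result == {"draft": "stub draft"}  # the node returns only its own key
    human_msg = fake.calls[0][1]              # second message is the human turn
    assert "prompt templates" in human_msg.content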

Next Steps

  • Add conditional edges so different prompts run based on classification results (see the sketch after this list)
  • Use structured outputs with Pydantic models for stronger downstream contracts
  • Split prompts into versioned modules so product teams can update copy without touching graph logic
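
As a rough sketch of the first item, assume a classifier node that writes a route label into state; the "route" key, the classify node, and the two draft nodes below are all hypothetical:

def route_by_label(state: AgentState) -> str:
    # Assumes AgentState gained a "route": str field set by the classifier.
    return "technical" if state["route"] == "technical" else "general"


builder.add_node("classify", classify)  # hypothetical classifier node
builder.add_node("technical_draft", write_technical_draft)
builder.add_node("general_draft", write_general_draft)

builder.add_edge(START, "classify")
builder.add_conditional_edges(
    "classify",
    route_by_label,
    {"technical": "technical_draft", "general": "general_draft"},
)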

By Cyprian Aarons, AI Consultant at Topiax.
