LangGraph Tutorial (Python): running agents in parallel for beginners
This tutorial shows you how to run multiple LangGraph agents in parallel, collect their outputs, and merge them into one result. You need this when a single user request should be handled by specialized agents at the same time, like one agent researching policy details while another drafts a customer response.
What You'll Need

- Python 3.10+
- `langgraph`
- `langchain-core`
- `langchain-openai`
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with LangGraph nodes, state, and edges

Install the packages:

```shell
pip install langgraph langchain-core langchain-openai
```

Set your API key:

```shell
export OPENAI_API_KEY="your-key-here"
```
Step-by-Step
- Define a shared state that can hold the user prompt plus the outputs from each parallel agent. We also define a reducer for the output lists so LangGraph can merge updates arriving from multiple branches.
```python
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage


def append_list(left: list[str], right: list[str]) -> list[str]:
    """Reducer: concatenates list updates coming from parallel branches."""
    return left + right


class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    research_notes: Annotated[list[str], append_list]
    draft_notes: Annotated[list[str], append_list]
    final_answer: str
```
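The reducer is just a plain function, so you can sanity-check how LangGraph will combine branch outputs by calling it directly (a standalone sketch, independent of the graph):

```python
def append_list(left: list[str], right: list[str]) -> list[str]:
    return left + right

# Simulate two parallel branches writing to the same state key;
# LangGraph applies the reducer to combine their updates.
merged = append_list(["note from research branch"], ["note from draft branch"])
print(merged)  # ['note from research branch', 'note from draft branch']
```

If you omit the reducer annotation, two branches writing to the same key in one step raise an update conflict, which is why each output list gets its own annotated key.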
- Create two simple worker nodes that act like separate agents. In real systems these can call different tools or models; here they generate useful intermediate outputs from the same input.
```python
def research_agent(state: AgentState):
    user_text = state["messages"][-1].content
    return {
        "research_notes": [
            f"Research agent saw request: {user_text}",
            "Key facts should be validated before responding.",
        ]
    }


def draft_agent(state: AgentState):
    user_text = state["messages"][-1].content
    return {
        "draft_notes": [
            f"Draft agent saw request: {user_text}",
            "Response should be short, clear, and action-oriented.",
        ]
    }
```
- Add a merge node that combines both branches into one final answer. This node runs only after both parallel branches finish, which is the fan-in pattern at the heart of parallel execution in LangGraph.
```python
def merge_results(state: AgentState):
    research = "\n".join(state["research_notes"])
    draft = "\n".join(state["draft_notes"])
    final_answer = (
        "Combined result:\n"
        f"- Research summary:\n{research}\n\n"
        f"- Draft guidance:\n{draft}"
    )
    return {"final_answer": final_answer}
```
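Because the merge node only reads plain keys from the state dict, you can exercise it in isolation before wiring up the graph. A minimal sketch using a hand-built state (no graph or LLM needed; the note contents are invented for the demo):

```python
def merge_results(state):
    research = "\n".join(state["research_notes"])
    draft = "\n".join(state["draft_notes"])
    final_answer = (
        "Combined result:\n"
        f"- Research summary:\n{research}\n\n"
        f"- Draft guidance:\n{draft}"
    )
    return {"final_answer": final_answer}

# Hand-built state standing in for what the two branches would produce.
fake_state = {
    "research_notes": ["fact A", "fact B"],
    "draft_notes": ["keep it short"],
}
out = merge_results(fake_state)
print(out["final_answer"])
```

Testing nodes this way is cheap: a LangGraph node is just a function from state to a partial state update.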
- Build the graph with one split into two parallel branches and one join back into the merge node. The important part is that both agents start from the same input and write to separate keys in the shared state.
```python
builder = StateGraph(AgentState)
builder.add_node("research_agent", research_agent)
builder.add_node("draft_agent", draft_agent)
builder.add_node("merge_results", merge_results)

builder.add_edge(START, "research_agent")
builder.add_edge(START, "draft_agent")
builder.add_edge("research_agent", "merge_results")
builder.add_edge("draft_agent", "merge_results")
builder.add_edge("merge_results", END)

graph = builder.compile()
```
- Run the graph with a user message and inspect the final state. You should see both branches contribute data before the merged answer is produced.
```python
result = graph.invoke(
    {
        "messages": [HumanMessage(content="Explain whether this policy covers water damage.")],
        "research_notes": [],
        "draft_notes": [],
        "final_answer": "",
    }
)
print(result["final_answer"])
```
- Replace the placeholder workers with real LLM-backed agents when you're ready. The graph structure stays the same; only the node internals change.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")


def llm_research_agent(state: AgentState):
    prompt = f"Research this request and return concise bullet points:\n{state['messages'][-1].content}"
    response = llm.invoke([HumanMessage(content=prompt)])
    return {"research_notes": [response.content]}


def llm_draft_agent(state: AgentState):
    prompt = f"Draft a customer-friendly response to:\n{state['messages'][-1].content}"
    response = llm.invoke([HumanMessage(content=prompt)])
    return {"draft_notes": [response.content]}
```
Testing It
Run the script and confirm that both `research_notes` and `draft_notes` are present in the returned state. If either list is empty, your branch wiring or reducers are wrong.
A good sanity check is to print a timestamp inside each node and confirm both nodes are reached during a single invocation. For real LLM calls, also verify that each branch receives the same input message.
If you swap in `llm_research_agent` and `llm_draft_agent`, make sure your API key is available in the environment before running. If you get model errors, test each node on its own outside LangGraph first.
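One way to add those timestamps without editing the node bodies is a small wrapper (a sketch; the `timed` helper name and log format are my own, not part of LangGraph):

```python
import time


def timed(name, node):
    """Wrap a node function so it logs when it runs, then delegates."""
    def wrapper(state):
        print(f"[{time.strftime('%H:%M:%S')}] {name} running")
        return node(state)
    return wrapper


# Register the wrapped versions instead of the bare functions:
# builder.add_node("research_agent", timed("research_agent", research_agent))
# builder.add_node("draft_agent", timed("draft_agent", draft_agent))
```

If both wrapped nodes log during one `invoke`, your fan-out wiring is working.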
Next Steps
- Add tool calling inside each branch so one agent can search documents while another summarizes them.
- Use conditional routing to decide when to run branches in parallel versus sequentially.
- Persist graph state with a checkpointer so parallel workflows can resume after failures or interruptions.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.