LangGraph Tutorial (Python): running agents in parallel for advanced developers

By Cyprian Aarons
Updated 2026-04-22

This tutorial shows you how to run multiple LangGraph agents in parallel, collect their outputs, and merge them into a single result. This pattern matters when one agent is not enough: for example, one agent can extract facts, another can classify risk, and a third can draft a response, all without blocking each other.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-openai
  • An OpenAI API key
  • Basic familiarity with LangGraph StateGraph, nodes, edges, and reducers
  • A terminal with pip installed

Install the packages:

pip install langgraph langchain-openai openai

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start by defining a shared state that can hold outputs from multiple agents. The key part is using reducers so parallel branches can write into the same field without clobbering each other.
from typing import TypedDict, Annotated
from operator import add

class ParallelState(TypedDict):
    input_text: str
    findings: Annotated[list[str], add]
    summary: str
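To see what the add reducer does when two parallel branches write to findings, here is a minimal stdlib sketch; the branch payloads are made up for illustration:

```python
from operator import add

# Hypothetical partial updates returned by two parallel branches.
branch_a = {"findings": ["facts: policy was active"]}
branch_b = {"findings": ["risk: low"]}

# LangGraph resolves concurrent writes to a reducer-annotated key by
# applying the reducer; for lists, operator.add concatenates them.
merged = add(branch_a["findings"], branch_b["findings"])
```

Without the reducer, the second branch to finish would simply overwrite the first branch's list.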
  2. Create two agent nodes that do different work on the same input. In this example, one extracts factual observations and the other produces a short summary.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def fact_agent(state: ParallelState) -> dict:
    prompt = f"Extract 3 concrete facts from this text:\n\n{state['input_text']}"
    result = llm.invoke(prompt)
    return {"findings": [f"facts: {result.content}"]}

def summary_agent(state: ParallelState) -> dict:
    prompt = f"Summarize this text in one sentence:\n\n{state['input_text']}"
    result = llm.invoke(prompt)
    return {"summary": result.content}
  3. Build the graph so both agents run after the same entry point. Use START to fan out into both nodes, then route both into a join node that finalizes the response.
from langgraph.graph import StateGraph, START, END

def join_node(state: ParallelState) -> dict:
    # Return only the fields you intend to update. Returning the full state
    # would feed "findings" back through its additive reducer and duplicate
    # every entry.
    return {"summary": state["summary"]}

graph_builder = StateGraph(ParallelState)
graph_builder.add_node("fact_agent", fact_agent)
graph_builder.add_node("summary_agent", summary_agent)
graph_builder.add_node("join", join_node)

graph_builder.add_edge(START, "fact_agent")
graph_builder.add_edge(START, "summary_agent")
graph_builder.add_edge("fact_agent", "join")
graph_builder.add_edge("summary_agent", "join")
graph_builder.add_edge("join", END)

app = graph_builder.compile()
  4. Run the graph with input data and inspect the merged output. Because findings uses an additive reducer, both branches can contribute to the same list safely.
result = app.invoke(
    {
        "input_text": "The customer submitted a claim after a car accident on Monday. The policy was active and premiums were paid.",
        "findings": [],
        "summary": "",
    }
)

print("Summary:", result["summary"])
print("Findings:", result["findings"])
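Because each agent above tags its entries with a label prefix ("facts: ", "risk: "), the merged list can be split back into a dict downstream. A small sketch, with illustrative entries standing in for real model output:

```python
# Example findings in the "<label>: <body>" convention used by the agents.
findings = [
    "facts: The claim was filed Monday.",
    "risk: low",
]

# Split each entry at the first ": " to recover a label -> body mapping.
by_label = {}
for entry in findings:
    label, _, body = entry.partition(": ")
    by_label[label] = body
```

This keeps the shared state a flat list (which the reducer handles cheaply) while still letting consumers address each branch's output by name.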
  5. If you want true concurrency at scale, keep each branch stateless and deterministic where possible. That makes parallel execution easier to reason about and prevents hidden dependencies between agents.
def risk_agent(state: ParallelState) -> dict:
    prompt = f"Assess whether this text implies low, medium, or high risk:\n\n{state['input_text']}"
    result = llm.invoke(prompt)
    return {"findings": [f"risk: {result.content}"]}

graph_builder = StateGraph(ParallelState)
graph_builder.add_node("fact_agent", fact_agent)
graph_builder.add_node("summary_agent", summary_agent)
graph_builder.add_node("risk_agent", risk_agent)
graph_builder.add_node("join", join_node)

graph_builder.add_edge(START, "fact_agent")
graph_builder.add_edge(START, "summary_agent")
graph_builder.add_edge(START, "risk_agent")
graph_builder.add_edge("fact_agent", "join")
graph_builder.add_edge("summary_agent", "join")
graph_builder.add_edge("risk_agent", "join")
graph_builder.add_edge("join", END)

app = graph_builder.compile()

Testing It

Run the script locally and confirm all three branches produce output for the same input. You should see one summary plus multiple entries in findings, with no overwritten data.

If you get empty fields back, check that your reducer is configured correctly with Annotated[list[str], add]. If you get API errors, verify your OpenAI key and model name first.

A good test is to swap in a slow prompt or a mocked LLM response and confirm the graph still returns only after all branches finish. That tells you your join pattern is working as intended.
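The fan-out/fan-in behavior you are testing for can be sketched without LangGraph at all, using mocked agents and a thread pool; the agent functions and delays below are stand-ins, not LangGraph APIs:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the LLM-backed agents; one is deliberately slow.
def slow_fact_agent(text: str) -> dict:
    time.sleep(0.2)  # simulate a slow model call
    return {"findings": [f"facts: {len(text)} chars"]}

def quick_summary_agent(text: str) -> dict:
    return {"summary": text[:30]}

def run_fan_out(text: str) -> dict:
    state = {"input_text": text, "findings": [], "summary": ""}
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(slow_fact_agent, text),
            pool.submit(quick_summary_agent, text),
        ]
        # The "join": block until every branch has produced its update,
        # merging list-valued keys additively like the reducer does.
        for future in futures:
            update = future.result()
            for key, value in update.items():
                if isinstance(state[key], list):
                    state[key] = state[key] + value
                else:
                    state[key] = value
    return state

result = run_fan_out("The customer submitted a claim.")
```

If the result contains both the summary and the slow branch's findings, the join waited for everything, which is exactly what the compiled graph should do.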

Next Steps

  • Add conditional routing so only some branches run based on document type or risk score
  • Replace simple string outputs with structured Pydantic models for production validation
  • Add retries and timeouts around each node for resilient agent execution
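The first bullet's routing decision is ordinary Python: a function that inspects state and returns the branch names to run. In LangGraph you would hand a function like this to add_conditional_edges; the keyword triggers below are purely illustrative:

```python
def route_branches(state: dict) -> list[str]:
    # Always run the summarizer; add other branches based on the input.
    branches = ["summary_agent"]
    text = state["input_text"].lower()
    if "claim" in text:
        branches.append("fact_agent")
    if "accident" in text:
        branches.append("risk_agent")
    return branches

selected = route_branches({"input_text": "A claim after a car accident."})
```

Keeping the selection logic in a plain function like this also makes it trivial to unit-test without invoking any model.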


By Cyprian Aarons, AI Consultant at Topiax.
