LangGraph Tutorial (Python): running agents in parallel for intermediate developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to fan out one user request into multiple LangGraph agents, run them in parallel, and merge the results back into a single response. You need this when one agent is too narrow for the job, like comparing providers, checking policy rules, or gathering independent evidence before making a decision.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-openai
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with LangGraph state, nodes, and edges
  • A terminal and a virtual environment

Install the packages:

pip install langgraph langchain-openai

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with a shared state that can hold the user input plus outputs from each branch. The important part here is that each parallel node writes to its own field so the branches do not overwrite each other.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    question: str
    research_a: str
    research_b: str
    final_answer: str
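Separate keys are the simplest way to keep branches from colliding, but LangGraph can also merge concurrent writes to the same key if you annotate the field with a reducer. A minimal sketch of what that schema could look like, with hypothetical names (`PooledState`, `notes`) that are not part of this tutorial's graph:

```python
import operator
from typing import Annotated, TypedDict

# Alternative schema: if several branches should append to the SAME key,
# annotate the field with a reducer so LangGraph combines the concurrent
# writes instead of raising a conflict.
class PooledState(TypedDict):
    question: str
    # operator.add concatenates the lists returned by each branch
    notes: Annotated[list[str], operator.add]

# The reducer itself is plain Python; this is what would be applied when
# two nodes each return {"notes": [...]} in the same step.
merged = operator.add(["agent A note"], ["agent B note"])
print(merged)
```

For this tutorial we stick with separate keys, which keeps the schema easy to read and debug.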
  2. Define two independent agent functions. In production these would call different prompts, tools, or even different models, but for this tutorial we keep them deterministic so you can run the example as written, without external dependencies.
def agent_a(state: AgentState):
    q = state["question"]
    return {"research_a": f"Agent A analyzed: {q}. Focused on risk and compliance."}

def agent_b(state: AgentState):
    q = state["question"]
    return {"research_b": f"Agent B analyzed: {q}. Focused on cost and implementation."}
  3. Add a merge node that combines both outputs into one result. This is where you decide how to reconcile parallel work: concatenate, rank, validate, or pass both into a final LLM call.
def merge_results(state: AgentState):
    answer = (
        "Combined report:\n"
        f"- {state['research_a']}\n"
        f"- {state['research_b']}"
    )
    return {"final_answer": answer}
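As one illustration of the "rank" option, here is a hedged sketch of a merge variant that orders the branch outputs before combining them. The scoring rule (longer report first) is a deliberately crude stand-in for real evaluation logic, and `merge_ranked` is a hypothetical name, not part of the graph above:

```python
# Variant of merge_results: rank the branch outputs and surface the more
# detailed one first, instead of concatenating in a fixed order.
def merge_ranked(state: dict) -> dict:
    reports = [state["research_a"], state["research_b"]]
    reports.sort(key=len, reverse=True)  # crude proxy for "more detail"
    answer = "Combined report (ranked):\n" + "\n".join(f"- {r}" for r in reports)
    return {"final_answer": answer}

demo = merge_ranked({
    "research_a": "short",
    "research_b": "a much longer analysis",
})
print(demo["final_answer"])
```

Because the node only reads state and returns a partial update, you can swap it in for `merge_results` without touching the rest of the graph.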
  4. Wire the graph so both agents run from the same start node in parallel, then feed into the merge step. LangGraph will execute both branches independently before continuing to the next node.
graph = StateGraph(AgentState)

graph.add_node("agent_a", agent_a)
graph.add_node("agent_b", agent_b)
graph.add_node("merge", merge_results)

graph.add_edge(START, "agent_a")
graph.add_edge(START, "agent_b")
graph.add_edge("agent_a", "merge")
graph.add_edge("agent_b", "merge")
graph.add_edge("merge", END)

app = graph.compile()
  5. Invoke the graph with a real input and inspect the merged output. Because both branches write to separate keys, the final node can safely read from both without race conditions in your state schema.
result = app.invoke({"question": "Should we approve this loan application?"})

print(result["research_a"])
print(result["research_b"])
print(result["final_answer"])
  6. If you want to swap in actual LLM-backed agents later, keep the same graph shape and replace the node bodies with model calls. The parallel structure stays identical; only the internals of each branch change.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def llm_agent_a(state: AgentState):
    msg = llm.invoke(f"Analyze risk for: {state['question']}")
    return {"research_a": msg.content}

def llm_agent_b(state: AgentState):
    msg = llm.invoke(f"Analyze cost for: {state['question']}")
    return {"research_b": msg.content}

Testing It

Run the script and confirm that research_a, research_b, and final_answer are all present in the returned state. If one of the branch keys is missing, your state schema or node return values are wrong.

A good sanity check is to change the input question and verify that both branches update independently. If you wire in real LLM calls later, also test with logging enabled so you can confirm both agent nodes execute before the merge step runs.

If you want stronger validation, add assertions after invoke():

assert "research_a" in result
assert "research_b" in result
assert "final_answer" in result

Next Steps

  • Add conditional routing so only some requests fan out into parallel branches.
  • Replace the merge step with an evaluator node that scores each branch before combining them.
  • Use tool-calling agents inside each branch for bank-grade workflows like KYC review, policy lookup, or claims triage.


By Cyprian Aarons, AI Consultant at Topiax.
