LangGraph Tutorial (TypeScript): running agents in parallel for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to build a LangGraph workflow in TypeScript that runs multiple agents in parallel, waits for all of them, and then merges their outputs into one final answer. You need this when one agent is not enough: for example, when you want separate research, compliance, and summarization passes happening at the same time.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • @langchain/langgraph
  • @langchain/openai
  • @langchain/core
  • An OpenAI API key set as OPENAI_API_KEY
  • A project with ESM enabled or a TypeScript setup that supports import syntax
  • Basic familiarity with LangGraph state, nodes, and edges

Step-by-Step

  1. Start by defining a state shape that can hold the shared input plus each parallel agent’s output. The key idea is that each branch writes to its own field so there are no race conditions when the graph runs concurrently.
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const GraphState = Annotation.Root({
  topic: Annotation<string>(),
  research: Annotation<string>(),
  riskReview: Annotation<string>(),
  summary: Annotation<string>(),
});

type GraphStateType = typeof GraphState.State;
  2. Create one node per agent. Each node receives the same input state, calls the model with a different prompt, and returns only its own field.
async function researchAgent(state: GraphStateType) {
  const response = await llm.invoke([
    new HumanMessage(`Research this topic for an enterprise team: ${state.topic}`),
  ]);

  return { research: response.content.toString() };
}

async function riskAgent(state: GraphStateType) {
  const response = await llm.invoke([
    new HumanMessage(`Review this topic for risks and failure modes: ${state.topic}`),
  ]);

  return { riskReview: response.content.toString() };
}

async function summaryAgent(state: GraphStateType) {
  const response = await llm.invoke([
    new HumanMessage(
      `Write a concise executive summary using this research:\n${state.research}\n\nRisk review:\n${state.riskReview}`
    ),
  ]);

  return { summary: response.content.toString() };
}
  3. Build a graph that fans out from START into both agents at once, then joins them before the final summarizer runs. LangGraph will execute the independent branches in parallel because they do not depend on each other’s outputs.
const graph = new StateGraph(GraphState)
  .addNode("researchAgent", researchAgent)
  .addNode("riskAgent", riskAgent)
  .addNode("summaryAgent", summaryAgent)
  .addEdge(START, "researchAgent")
  .addEdge(START, "riskAgent")
  .addEdge("researchAgent", "summaryAgent")
  .addEdge("riskAgent", "summaryAgent")
  .addEdge("summaryAgent", END);

const app = graph.compile();
  4. Invoke the graph with a real topic and print the merged result. In production, this pattern is useful when each branch has a different responsibility and you want deterministic orchestration instead of ad hoc Promise handling.
async function main() {
  const result = await app.invoke({
    // Only topic is required input; the other channels are filled in by the nodes.
    topic: "Using AI agents for claims triage in insurance",
    research: "",
    riskReview: "",
    summary: "",
  });

  console.log("=== Research ===");
  console.log(result.research);
  console.log("\n=== Risk Review ===");
  console.log(result.riskReview);
  console.log("\n=== Summary ===");
  console.log(result.summary);
}

main().catch(console.error);
  5. If you want stricter coordination, add a gate node that only runs after both branches have completed successfully. This is useful when one branch may fail independently and you want to centralize fallback logic before continuing.
async function gatekeeper(state: GraphStateType) {
  if (!state.research || !state.riskReview) {
    throw new Error("Parallel branches did not produce all required outputs.");
  }

  // No state update needed: returning an empty object leaves the channels unchanged.
  return {};
}

const guardedGraph = new StateGraph(GraphState)
  .addNode("researchAgent", researchAgent)
  .addNode("riskAgent", riskAgent)
  .addNode("gatekeeper", gatekeeper)
  .addNode("summaryAgent", summaryAgent)
  .addEdge(START, "researchAgent")
  .addEdge(START, "riskAgent")
  .addEdge("researchAgent", "gatekeeper")
  .addEdge("riskAgent", "gatekeeper")
  .addEdge("gatekeeper", "summaryAgent")
  .addEdge("summaryAgent", END);

const guardedApp = guardedGraph.compile();

Testing It

Run the script with your API key set in the environment and verify that all three sections print non-empty text. The important thing to check is that research and riskReview are produced independently before summary is generated from both of them.

If you want to confirm parallel behavior, add timestamps inside each node and compare logs; both branch nodes should start without waiting on each other. Also test one branch returning slower than the other to make sure the join still completes correctly.
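One way to add those timestamps without touching the node bodies is a small wrapper. The sketch below assumes a hypothetical helper named `withTiming`, which is not part of LangGraph; it wraps any async node function before you register it with `addNode` and logs when the node starts and how long it took, without changing its return value.

```typescript
// Hypothetical helper (not part of LangGraph): wraps an async node function
// and logs when it starts and how long it ran, leaving its result untouched.
function withTiming<S, R>(name: string, node: (state: S) => Promise<R>) {
  return async (state: S): Promise<R> => {
    const started = Date.now();
    console.log(`[${name}] start ${new Date(started).toISOString()}`);
    const result = await node(state);
    console.log(`[${name}] finished after ${Date.now() - started}ms`);
    return result;
  };
}

// Register the wrapped node instead of the bare function, e.g.:
//   .addNode("researchAgent", withTiming("researchAgent", researchAgent))
//   .addNode("riskAgent", withTiming("riskAgent", riskAgent))
```

If both branches truly run in parallel, their `start` lines will appear back to back in the log rather than one branch's `finished` line preceding the other's `start`.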

For deeper validation, change one prompt so it returns a short deterministic string and assert against it in a unit test. That gives you confidence your state wiring is correct before you plug in more expensive model calls.

Next Steps

  • Add structured output with Zod so each branch returns typed JSON instead of raw text
  • Introduce retries and fallbacks per node for resilient production workflows
  • Use conditional edges to route only certain topics through parallel agents

By Cyprian Aarons, AI Consultant at Topiax.
