LangGraph Tutorial (TypeScript): running agents in parallel for beginners

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to build a LangGraph workflow in TypeScript that runs two agents in parallel, waits for both results, and then merges them into one final answer. You need this when one agent alone is too slow or too narrow, and you want independent reasoning paths before making a decision.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • npm or pnpm
  • An OpenAI API key
  • These packages:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • zod

Install them like this:

npm install @langchain/langgraph @langchain/openai @langchain/core zod
npm install -D typescript tsx @types/node

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with a small state object that can hold the original question, both agent outputs, and the final merged response. Keep the state shape explicit so your graph stays easy to debug.
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const GraphState = Annotation.Root({
  question: Annotation<string>(),
  research: Annotation<string>(),
  critique: Annotation<string>(),
  finalAnswer: Annotation<string>(),
});
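Each branch in this tutorial writes a different key (research vs. critique), so the default last-value channel is enough. If two parallel branches ever wrote the same key, that channel would need a reducer to merge the concurrent writes. Here is a plain-TS sketch of the reducer idea; the "notes" channel is hypothetical and not part of the tutorial's state:

```typescript
// Plain-TS sketch of a reducer: a function that merges a concurrent write
// into the existing channel value. (Hypothetical example; this tutorial's
// branches write separate keys, so no reducer is declared above.)
type Reducer<T> = (existing: T, incoming: T) => T;

const concatReducer: Reducer<string[]> = (existing, incoming) =>
  existing.concat(incoming);

// Two parallel branches each write to one hypothetical "notes" channel:
const afterFirst = concatReducer([], ["research notes"]);
const afterBoth = concatReducer(afterFirst, ["critique notes"]);
```

In LangGraph's Annotation API, the same idea is expressed by passing a reducer (and a default) when declaring the channel.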
  2. Create two independent agents. In this example, one agent does research and the other acts as a critic, but the pattern works for any pair of parallel tasks.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function researchAgent(state: typeof GraphState.State) {
  const res = await model.invoke([
    { role: "system", content: "You are a concise research assistant." },
    { role: "user", content: `Research this question: ${state.question}` },
  ]);
  return { research: res.content.toString() };
}

async function critiqueAgent(state: typeof GraphState.State) {
  const res = await model.invoke([
    { role: "system", content: "You are a strict reviewer." },
    { role: "user", content: `Critique this question from a risk perspective: ${state.question}` },
  ]);
  return { critique: res.content.toString() };
}
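Why bother with the fan-out? A plain-TS analogy (with hypothetical stand-in agents, no LLM calls) shows the latency win: two independent tasks started together take roughly max(a, b) time instead of a + b:

```typescript
// Plain-TS analogy for the fan-out: two independent async tasks started
// together finish in roughly max(a, b) time, not a + b. fakeAgent is a
// hypothetical stand-in for an LLM call.
async function fakeAgent(label: string, ms: number): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, ms));
  return label;
}

async function demo() {
  const start = Date.now();
  const [research, critique] = await Promise.all([
    fakeAgent("research", 100),
    fakeAgent("critique", 100),
  ]);
  return { research, critique, elapsedMs: Date.now() - start };
}
```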
  3. Add a merge node that waits for both branches to finish. This is where parallel execution becomes useful, because the final answer can combine two different perspectives instead of relying on one pass.
async function mergeAgent(state: typeof GraphState.State) {
  const res = await model.invoke([
    {
      role: "system",
      content:
        "You combine research and critique into one clear answer.",
    },
    {
      role: "user",
      content:
        `Question: ${state.question}\n\nResearch:\n${state.research}\n\nCritique:\n${state.critique}`,
    },
  ]);
  return { finalAnswer: res.content.toString() };
}
  4. Build the graph with two branches coming out of START. In LangGraph, two edges leaving START create branches that run concurrently, and a node with multiple incoming edges (here, the merge node) waits for all of them to finish before it runs.
const graph = new StateGraph(GraphState)
  .addNode("researchAgent", researchAgent)
  .addNode("critiqueAgent", critiqueAgent)
  .addNode("mergeAgent", mergeAgent)
  .addEdge(START, "researchAgent")
  .addEdge(START, "critiqueAgent")
  .addEdge("researchAgent", "mergeAgent")
  .addEdge("critiqueAgent", "mergeAgent")
  .addEdge("mergeAgent", END);
  5. Compile and run the graph with an initial question. If everything is wired correctly, both agents will execute in parallel and the merge step will produce one final response.
const app = graph.compile();

async function main() {
  const result = await app.invoke({
    question: "Should we allow instant loan approvals for small-ticket loans?",
    research: "",
    critique: "",
    finalAnswer: "",
  });

  console.log("\nFINAL ANSWER:\n");
  console.log(result.finalAnswer);
}

main().catch(console.error);
  6. Put it all together in one file and run it with tsx. This is the simplest way to test parallel branching without adding extra infrastructure.
// parallel-agents.ts
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const GraphState = Annotation.Root({
  question: Annotation<string>(),
  research: Annotation<string>(),
  critique: Annotation<string>(),
  finalAnswer: Annotation<string>(),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function researchAgent(state: typeof GraphState.State) {
  const res = await model.invoke([
    { role: "system", content: "You are a concise research assistant." },
    { role: "user", content: `Research this question: ${state.question}` },
  ]);
  return { research: res.content.toString() };
}

// ...continue the file with critiqueAgent, mergeAgent, the graph wiring,
// and main() exactly as written in steps 2-5 above.

Testing It

Run the file with:

npx tsx parallel-agents.ts

If it works, you should see one final answer that clearly reflects both the research output and the critique output. To confirm parallel behavior, add temporary console.log() statements at the start of each agent and watch both fire before the merge step completes.
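One way to add those temporary logs without editing each agent is a small wrapper; `timed` below is a hypothetical helper, not part of LangGraph:

```typescript
// Hypothetical logging wrapper: prints when a node starts and finishes.
// With true parallelism, both "start" lines appear before either "done" line.
function timed<S, R>(name: string, fn: (state: S) => Promise<R>) {
  return async (state: S): Promise<R> => {
    console.log(`${name} start @ ${Date.now()}`);
    const out = await fn(state);
    console.log(`${name} done @ ${Date.now()}`);
    return out;
  };
}

// Usage: .addNode("researchAgent", timed("researchAgent", researchAgent))
```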

If one branch fails silently, check that both edges originate from START and that your merge node reads fields written by both branches. Also make sure your state keys are initialized consistently; missing keys are a common source of confusion when beginners wire their first LangGraph fan-out.

Next Steps

  • Add structured outputs with zod so each agent returns typed data instead of plain text.
  • Replace one branch with a tool-using agent that fetches internal policy or customer data.
  • Add conditional routing so only some questions fan out to parallel agents.
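The routing idea in the last bullet boils down to a function from state to node names, which is the shape LangGraph's conditional edges expect. The keyword filter below is a hypothetical placeholder for real routing logic:

```typescript
// Hypothetical router: only risk-sensitive questions fan out to both
// agents; everything else goes to the research branch alone.
function routeQuestion(question: string): string[] {
  const riskSensitive = /loan|credit|fraud|compliance/i.test(question);
  return riskSensitive
    ? ["researchAgent", "critiqueAgent"]
    : ["researchAgent"];
}
```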

By Cyprian Aarons, AI Consultant at Topiax.
