LangGraph Tutorial (TypeScript): running agents in parallel for intermediate developers
This tutorial shows you how to run multiple LangGraph agents in parallel with TypeScript and merge their outputs into one result. You need this pattern when one agent is not enough, for example when you want separate research, risk-review, and summarization branches working on the same question at the same time.
What You'll Need
- Node.js 18+
- TypeScript 5+
- @langchain/langgraph
- @langchain/openai
- An OpenAI API key in OPENAI_API_KEY
- Basic familiarity with LangGraph nodes, edges, and state
- A project set up with ESM support
Step-by-Step
- Start with a minimal TypeScript project and install the packages. Use ESM so the LangGraph imports work cleanly; a minimal config follows the install commands.
npm init -y
npm install @langchain/langgraph @langchain/openai @langchain/core
npm install -D typescript tsx @types/node
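For the ESM part, one minimal setup looks like this. The exact compiler options are an assumption on my side; any ESM-compatible settings that tsx accepts will do.

package.json (excerpt):

{
  "type": "module"
}

tsconfig.json (excerpt):

{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "strict": true,
    "esModuleInterop": true
  }
}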
- Define a shared state that can collect results from parallel branches. The important part is using reducers so multiple nodes can write into the same array safely.
import { Annotation, START, END, StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const GraphState = Annotation.Root({
  question: Annotation<string>(),
  // The parallel branches append here; the reducer concatenates their writes.
  research: Annotation<string[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
  risks: Annotation<string[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
  // Last write wins: only the summarizer sets this.
  summary: Annotation<string>({
    reducer: (_, right) => right,
    default: () => "",
  }),
});
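To make the reducer's job concrete, here is the merge in isolation, with invented values. LangGraph applies the reducer to the current channel value and each incoming write, one update at a time:

// Illustration only: two parallel writes to `research` landing in the same step.
const current: string[] = [];                            // default: () => []
const afterFirst = current.concat(["note from A"]);      // first branch's update
const afterSecond = afterFirst.concat(["note from B"]);  // second branch's update
// afterSecond is ["note from A", "note from B"]: neither write is lost.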
- Create two independent agent nodes that run in parallel. Both read the same question, but one produces research notes while the other produces risk checks.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Branch 1: gathers research notes for the question.
async function researchAgent(state: typeof GraphState.State) {
  const response = await llm.invoke([
    new HumanMessage(
      `List 3 concise research points for this question:\n${state.question}`
    ),
  ]);
  return { research: [response.content.toString()] };
}

// Branch 2: surfaces implementation risks for the same question.
async function riskAgent(state: typeof GraphState.State) {
  const response = await llm.invoke([
    new HumanMessage(
      `List 3 concise implementation risks for this question:\n${state.question}`
    ),
  ]);
  return { risks: [response.content.toString()] };
}
- Add a join node that waits for both branches to finish and then combines their output. This is where parallel work becomes useful because you get one final answer from multiple specialists.
// Join node: runs only after both parallel branches have written their results.
async function summarizeAgent(state: typeof GraphState.State) {
  const prompt = `
Question: ${state.question}
Research:
${state.research.join("\n")}
Risks:
${state.risks.join("\n")}
Write a short final recommendation.
`;
  const response = await llm.invoke([new HumanMessage(prompt)]);
  return { summary: response.content.toString() };
}
- Wire the graph so the two agents run from START in parallel and both feed into the summarizer. In LangGraph, multiple outgoing edges from the same node are enough to fan out work.
const graph = new StateGraph(GraphState)
  .addNode("researchAgent", researchAgent)
  .addNode("riskAgent", riskAgent)
  .addNode("summarizeAgent", summarizeAgent)
  // Two edges out of START fan the work out to both agents in parallel.
  .addEdge(START, "researchAgent")
  .addEdge(START, "riskAgent")
  // Two edges into the summarizer make it the join point for both branches.
  .addEdge("researchAgent", "summarizeAgent")
  .addEdge("riskAgent", "summarizeAgent")
  .addEdge("summarizeAgent", END);

const app = graph.compile();
- Run it with a real question and inspect the merged output. If both branches execute correctly, you should see one summary built from two independent agent results.
const result = await app.invoke({
  question: "Should we add real-time fraud scoring to card payments?",
});

console.log("Research:", result.research);
console.log("Risks:", result.risks);
console.log("Summary:", result.summary);
Testing It
Run the file with npx tsx your-file.ts after setting OPENAI_API_KEY. If the graph is wired correctly, both parallel nodes should complete before the summarizer runs.
A good sanity check is to log timestamps inside each node and confirm they start independently. You should also verify that both research and risks arrays contain data before summary is produced.
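A sketch of that timestamp check, wrapping the existing researchAgent (do the same for riskAgent):

// Wrapper that logs when the node starts and finishes.
async function researchAgentTimed(state: typeof GraphState.State) {
  console.log(`researchAgent start ${new Date().toISOString()}`);
  const update = await researchAgent(state);
  console.log(`researchAgent end   ${new Date().toISOString()}`);
  return update;
}
// Register it in place of the original: .addNode("researchAgent", researchAgentTimed)
// Two start timestamps close together confirm the branches ran concurrently.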
If you only see one branch contributing, or the run fails outright, the reducer is usually the problem. Parallel writes to the same state key need a reducer; without one, LangGraph raises an InvalidUpdateError when two branches write that key in the same step.
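For example, if you later add a shared key that several branches write, it needs the same treatment as research and risks above. A hypothetical notes key might look like this:

const StateWithNotes = Annotation.Root({
  question: Annotation<string>(),
  // Without this reducer, two parallel writes to `notes` in one step
  // would error instead of merging.
  notes: Annotation<string[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});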
Next Steps
- Add more branches for compliance review, customer impact, or cost estimation
- Replace direct LLM calls with tool-using agents for retrieval and database lookups
- Learn conditional routing so branches only run when needed (see the sketch below)
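For the conditional-routing item, here is a minimal sketch with addConditionalEdges; the payment-keyword rule is invented for illustration:

// Router: decide which branches to run based on the question.
const routeBranches = (state: typeof GraphState.State) =>
  state.question.toLowerCase().includes("payment")
    ? ["researchAgent", "riskAgent"]
    : ["researchAgent"];

const conditionalApp = new StateGraph(GraphState)
  .addNode("researchAgent", researchAgent)
  .addNode("riskAgent", riskAgent)
  .addNode("summarizeAgent", summarizeAgent)
  // The third argument lists the possible targets so the graph stays typed.
  .addConditionalEdges(START, routeBranches, ["researchAgent", "riskAgent"])
  .addEdge("researchAgent", "summarizeAgent")
  .addEdge("riskAgent", "summarizeAgent")
  .addEdge("summarizeAgent", END)
  .compile();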
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.