LangChain Tutorial (TypeScript): running agents in parallel for advanced developers
This tutorial shows how to run multiple LangChain agents in parallel from TypeScript, collect their outputs, and merge them into a single result. You need this when one agent is too slow, when you want specialized agents for different tasks, or when you want to compare independent reasoning paths before deciding what to do next.
What You'll Need
- Node.js 18+
- TypeScript 5+
- An OpenAI API key
- These packages:
  - `langchain`
  - `@langchain/core`
  - `@langchain/openai`
  - `zod`
  - `tsx` or another TypeScript runner
- A project configured with ES modules
- Basic familiarity with LangChain agents and tool calling
Step-by-Step
1. Start with a clean TypeScript setup and install the dependencies.

The example below uses the current LangChain package split, so keep your imports aligned with the package names. `@langchain/core` is installed explicitly because the tool and prompt imports come from it.

```sh
npm init -y
npm install langchain @langchain/core @langchain/openai zod
npm install -D typescript tsx @types/node
```
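Because the later files use `.js` extensions in relative imports, the project needs Node-style ES module resolution. A minimal sketch of the relevant settings (the exact `target` is a choice, not a requirement): set `"type": "module"` in `package.json`, and use a `tsconfig.json` along these lines:

```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "strict": true
  }
}
```

`NodeNext` resolution is what makes imports like `./tools.js` resolve to `tools.ts` at dev time under `tsx`.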
2. Create a small toolset that each agent can use independently.

For parallel execution, the key is to make each agent responsible for one narrow job instead of giving all of them the same prompt and tools.

```ts
// tools.ts
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

export const summarizeRiskTool = new DynamicStructuredTool({
  name: "summarize_risk",
  description: "Summarize operational risk from a short scenario.",
  schema: z.object({
    scenario: z.string(),
  }),
  func: async ({ scenario }) => {
    return `Risk summary: ${scenario.slice(0, 120)}`;
  },
});

export const classifyPriorityTool = new DynamicStructuredTool({
  name: "classify_priority",
  description: "Classify incident priority as low, medium, or high.",
  schema: z.object({
    incident: z.string(),
  }),
  func: async ({ incident }) => {
    const text = incident.toLowerCase();
    if (text.includes("outage") || text.includes("payment failed")) return "high";
    if (text.includes("delay") || text.includes("error")) return "medium";
    return "low";
  },
});
```
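Because the tool bodies are plain TypeScript, you can sanity-check the priority heuristic without involving LangChain or an API key at all. A minimal sketch that duplicates the logic from `classifyPriorityTool` (the standalone `classifyPriority` helper is illustrative, not one of the tutorial's files):

```typescript
// Same heuristic as classifyPriorityTool.func, extracted for quick testing.
function classifyPriority(incident: string): "low" | "medium" | "high" {
  const text = incident.toLowerCase();
  if (text.includes("outage") || text.includes("payment failed")) return "high";
  if (text.includes("delay") || text.includes("error")) return "medium";
  return "low";
}

console.log(classifyPriority("Full outage in EU region"));           // high
console.log(classifyPriority("Report export error for one tenant")); // medium
console.log(classifyPriority("Feature request: dark mode"));         // low
```

Keeping tool logic this easy to exercise in isolation pays off once agents call it thousands of times.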
3. Build two agents with different prompts and run them at the same time using `Promise.all`.

This is the main pattern: create isolated agents, invoke them concurrently, then combine their outputs in a deterministic way.

```ts
// parallel-agents.ts
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { summarizeRiskTool, classifyPriorityTool } from "./tools.js";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function buildAgent(toolName: string, systemText: string) {
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", systemText],
    ["human", "{input}"],
    ["placeholder", "{agent_scratchpad}"],
  ]);
  const tool =
    toolName === "summarize_risk" ? summarizeRiskTool : classifyPriorityTool;
  const agent = await createOpenAIFunctionsAgent({
    llm,
    tools: [tool],
    prompt,
  });
  return new AgentExecutor({
    agent,
    tools: [tool],
  });
}

async function main() {
  const riskAgent = await buildAgent(
    "summarize_risk",
    "You are a risk analyst. Use the tool once, then answer concisely."
  );
  const priorityAgent = await buildAgent(
    "classify_priority",
    "You are an incident triage agent. Use the tool once, then answer only with the priority."
  );

  const input = "Payment failed for enterprise customers during checkout.";

  // Both executors run concurrently; total latency tracks the slower branch.
  const [riskResult, priorityResult] = await Promise.all([
    riskAgent.invoke({ input }),
    priorityAgent.invoke({ input }),
  ]);

  console.log("Risk:", riskResult.output);
  console.log("Priority:", priorityResult.output);
}

main().catch(console.error);
```
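One caveat with `Promise.all`: if either agent throws, the whole call rejects and the surviving branch's output is discarded. When partial results are acceptable, `Promise.allSettled` is a drop-in alternative. A minimal sketch with stubs (the `stubAgent` helper is an assumption standing in for an `AgentExecutor`, so this runs without an API key):

```typescript
// Stub standing in for AgentExecutor: same invoke() shape, no LLM call.
function stubAgent(output: string, failing = false) {
  return {
    invoke: async (_input: { input: string }) => {
      if (failing) throw new Error(`agent for "${output}" failed`);
      return { output };
    },
  };
}

async function runBoth() {
  const riskAgent = stubAgent("Risk summary: checkout outage");
  const priorityAgent = stubAgent("high", true); // simulate one branch failing

  // allSettled never rejects; each entry reports fulfilled or rejected.
  const results = await Promise.allSettled([
    riskAgent.invoke({ input: "Payment failed during checkout." }),
    priorityAgent.invoke({ input: "Payment failed during checkout." }),
  ]);

  // Keep what succeeded; surface what failed instead of losing both branches.
  for (const r of results) {
    if (r.status === "fulfilled") console.log("ok:", r.value.output);
    else console.log("failed:", (r.reason as Error).message);
  }
  return results;
}

runBoth();
```

Whether to fail fast (`all`) or degrade gracefully (`allSettled`) depends on whether the merged result is usable with a branch missing.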
4. Add a coordinator that merges both outputs into a final decision object.

In production systems, this is where you normalize agent responses into structured data so downstream services do not depend on free-form text.

```ts
// coordinator.ts
export type ParallelResult = {
  riskSummary: string;
  priority: string;
};

export function mergeResults(riskSummary: string, priority: string): ParallelResult {
  return {
    riskSummary,
    priority,
  };
}
```
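The coordinator above passes strings through untouched. If downstream services need guarantees, the merge point is the natural place to validate. A dependency-free sketch (the `PriorityLevel` narrowing and the normalization regex are design choices of this example, not part of the tutorial's files; a Zod schema would do the same job):

```typescript
type PriorityLevel = "low" | "medium" | "high";

type ValidatedResult = {
  riskSummary: string;
  priority: PriorityLevel;
};

// Agents sometimes decorate answers ("Priority: HIGH."); narrow to allowed levels.
function normalizePriority(raw: string): PriorityLevel {
  const match = raw.toLowerCase().match(/low|medium|high/);
  if (!match) throw new Error(`unrecognized priority: ${raw}`);
  return match[0] as PriorityLevel;
}

function mergeValidated(riskSummary: string, priorityRaw: string): ValidatedResult {
  if (!riskSummary.trim()) throw new Error("empty risk summary");
  return { riskSummary, priority: normalizePriority(priorityRaw) };
}

console.log(mergeValidated("Risk summary: checkout outage", "Priority: HIGH."));
```

Failing loudly here is deliberate: a rejected merge is easier to debug than a malformed decision object propagating downstream.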
5. Wire everything together in one runnable file and execute it.

This version keeps the flow explicit so you can swap in more agents later without changing the orchestration pattern.

```ts
// index.ts
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { summarizeRiskTool, classifyPriorityTool } from "./tools.js";
import { mergeResults } from "./coordinator.js";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function makeExecutor(tool: DynamicStructuredTool, systemText: string) {
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", systemText],
    ["human", "{input}"],
    ["placeholder", "{agent_scratchpad}"],
  ]);
  const agent = await createOpenAIFunctionsAgent({ llm, tools: [tool], prompt });
  return new AgentExecutor({ agent, tools: [tool] });
}

async function main() {
  // Building the executors is itself async, so parallelize that step too.
  const [riskAgent, priorityAgent] = await Promise.all([
    makeExecutor(summarizeRiskTool, "Summarize operational risk using the tool."),
    makeExecutor(classifyPriorityTool, "Classify incident priority using the tool."),
  ]);

  const input = "Payment failed for enterprise customers during checkout.";
  const [riskRes, priorityRes] = await Promise.all([
    riskAgent.invoke({ input }),
    priorityAgent.invoke({ input }),
  ]);

  const merged = mergeResults(riskRes.output as string, priorityRes.output as string);
  console.log(JSON.stringify(merged, null, 2));
}

main().catch(console.error);
```
Testing It
Run the script with your OpenAI key set in the environment:
```sh
export OPENAI_API_KEY="your-key"
npx tsx index.ts
```
You should see both agents complete independently and print a merged JSON object at the end. If one agent is slower than the other, total runtime should still be close to the slower branch rather than the sum of both branches. That is the point of parallel execution.
If you want to verify concurrency more explicitly, add timestamps before each invoke() call and after each response. You should see overlapping execution instead of strictly sequential logs.
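To see the "runtime tracks the slower branch" claim concretely, you can time two simulated agents with known latencies; no API key is needed. A minimal sketch (the delays and stub functions are arbitrary stand-ins for the two agent invocations):

```typescript
const sleep = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

// Stand-ins for the two agent branches, with known latencies.
async function slowAgent(): Promise<string> { await sleep(200); return "risk summary"; }
async function fastAgent(): Promise<string> { await sleep(50); return "high"; }

async function timeParallel(): Promise<number> {
  const start = Date.now();
  await Promise.all([slowAgent(), fastAgent()]);
  // Elapsed time is ~200 ms (the slower branch), not ~250 ms (the sum).
  return Date.now() - start;
}

timeParallel().then((elapsed) => console.log(`parallel took ${elapsed} ms`));
```

Swapping `Promise.all` for sequential `await` calls in the same harness makes the difference obvious: elapsed time jumps from roughly the maximum of the branches to roughly their sum.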
Next Steps
- Add a third agent for compliance review and merge all three results into one typed decision object.
- Replace string outputs with Zod-validated structured responses so downstream systems can trust shape and types.
- Move orchestration into a queue worker or API handler so parallel agents can run per request in production.
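The first of those steps needs no new plumbing: keep the executors in a keyed collection and map over them. A sketch with stubs (`runAll`, `Invokable`, and the stub executors are illustrative names for this example, not LangChain APIs):

```typescript
// Minimal shape shared by AgentExecutor and these stubs.
interface Invokable {
  invoke(input: { input: string }): Promise<{ output: string }>;
}

// Run every agent against the same input concurrently, keyed by name.
async function runAll(
  agents: Record<string, Invokable>,
  input: string
): Promise<Record<string, string>> {
  const names = Object.keys(agents);
  const outputs = await Promise.all(names.map((n) => agents[n].invoke({ input })));
  return Object.fromEntries(names.map((n, i) => [n, outputs[i].output]));
}

// Stubs standing in for the risk, priority, and compliance executors.
const stub = (output: string): Invokable => ({ invoke: async () => ({ output }) });

runAll(
  { risk: stub("checkout risk"), priority: stub("high"), compliance: stub("review PCI scope") },
  "Payment failed for enterprise customers during checkout."
).then((merged) => console.log(merged));
```

Adding a fourth agent is then one new entry in the record, with the merge logic unchanged.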
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.