LangChain Tutorial (TypeScript): running agents in parallel for intermediate developers

By Cyprian Aarons. Updated 2026-04-21.

This tutorial shows you how to run multiple LangChain agents in parallel with TypeScript, collect their outputs, and merge them into one result. You need this when one agent is not enough: for example, when you want separate agents to analyze risk, summarize a customer record, and draft a response at the same time.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • langchain package
  • @langchain/openai package
  • OpenAI API key set as OPENAI_API_KEY
  • A project with "type": "module" or compatible ESM setup
  • Basic familiarity with LangChain chat models and tools

Step-by-Step

  1. Start by installing the packages and setting up your environment. This example uses OpenAI chat models through LangChain, so make sure your key is available before running anything.
npm install langchain @langchain/openai
npm install -D typescript tsx @types/node
  2. Create a small agent factory. Each agent gets its own role prompt, but they all use the same model and tool set. For this tutorial, we keep the tools simple so the parallelism pattern is clear.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function runAgent(role: string, input: string) {
  const messages = [
    new HumanMessage(
      `You are a ${role}. Analyze the following request and respond in one short paragraph:\n\n${input}`
    ),
  ];

  const result = await model.invoke(messages);
  // content can be a string or an array of content blocks; normalize to text
  return typeof result.content === "string"
    ? result.content
    : JSON.stringify(result.content);
}
  3. Run the agents in parallel with Promise.all. This is the core pattern: each task starts at the same time, and you wait for all of them before combining results.
async function main() {
  const request =
    "Customer reports repeated card declines after a hotel booking in another country.";

  const tasks = [
    runAgent("fraud analyst", request),
    runAgent("customer support specialist", request),
    runAgent("chargeback operations specialist", request),
  ];

  const [fraud, support, chargeback] = await Promise.all(tasks);

  console.log("=== Fraud Analyst ===");
  console.log(fraud);
  console.log("\n=== Customer Support ===");
  console.log(support);
  console.log("\n=== Chargeback Ops ===");
  console.log(chargeback);
}

main().catch(console.error);
  4. Add a merger step that turns those parallel outputs into one usable answer. In production, this is where you would format a final response for a CRM note, case summary, or next-action recommendation.
async function mergeResults(
  fraud: string,
  support: string,
  chargeback: string
) {
  const prompt = `
Combine these three specialist opinions into one concise action plan:

Fraud Analyst:
${fraud}

Customer Support:
${support}

Chargeback Ops:
${chargeback}

Return:
1. Priority assessment
2. Recommended next action
3. One customer-facing sentence
`;

  const result = await model.invoke([new HumanMessage(prompt)]);
  // content can be a string or an array of content blocks; normalize to text
  return typeof result.content === "string"
    ? result.content
    : JSON.stringify(result.content);
}
  5. Wire everything together and run it end to end. This version runs the three agents in parallel, merges their outputs, and prints the final decision.
async function main() {
  const request =
    "Customer reports repeated card declines after a hotel booking in another country.";

  const [fraud, support, chargeback] = await Promise.all([
    runAgent("fraud analyst", request),
    runAgent("customer support specialist", request),
    runAgent("chargeback operations specialist", request),
  ]);

  const finalAnswer = await mergeResults(fraud, support, chargeback);

  console.log(finalAnswer);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

Testing It

Run the file with npx tsx your-file.ts. You should see three agent responses come back independently, followed by one merged summary.

If one agent is slow, the total runtime should still be close to the slowest single call rather than the sum of all calls. That’s the main signal that your tasks are truly running in parallel.
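You can see that timing claim concretely without spending tokens. This sketch stubs the agent calls with timed delays; the sleep helper and the millisecond durations are stand-ins for real model latency, not part of the tutorial's agents:

```typescript
// Stand-in for a model call: resolves after the given delay.
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function timedBatch(): Promise<number> {
  const start = Date.now();
  // Three "agents" with different latencies, started at the same time.
  await Promise.all([sleep(100), sleep(200), sleep(300)]);
  // Elapsed time tracks the slowest call (~300ms), not the sum (~600ms).
  return Date.now() - start;
}

timedBatch().then((elapsed) => console.log(`batch took ~${elapsed}ms`));
```

If the elapsed time is close to the sum of the delays instead of the largest one, something is awaiting the calls sequentially.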

To test failure handling, temporarily break one prompt or point one agent at invalid input. With Promise.all, a single rejection will fail the whole batch; that is often what you want for strict workflows.
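The fail-fast behavior is easy to demonstrate without any API calls. This sketch uses plain resolved and rejected promises as stand-ins for agent results:

```typescript
// Sketch: with Promise.all, one rejection fails the whole batch.
async function strictBatch(): Promise<string> {
  try {
    await Promise.all([
      Promise.resolve("fraud ok"),
      Promise.reject(new Error("support agent failed")), // simulated broken agent
      Promise.resolve("chargeback ok"),
    ]);
    return "all succeeded";
  } catch (err) {
    // The first rejection wins; the other results are discarded.
    return `batch failed: ${(err as Error).message}`;
  }
}
```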

If you need partial success instead, switch to Promise.allSettled and handle each result separately.
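Here is a minimal sketch of that pattern. The stub agent and its failure flag are illustrative stand-ins for runAgent, not part of the tutorial's real agents:

```typescript
// Stub standing in for runAgent; `fail` simulates a broken agent.
async function runStubAgent(role: string, fail = false): Promise<string> {
  if (fail) throw new Error(`${role} failed`);
  return `${role}: analysis complete`;
}

async function runWithPartialSuccess() {
  const roles = [
    "fraud analyst",
    "customer support specialist",
    "chargeback operations specialist",
  ];
  // allSettled never rejects; every task reports fulfilled or rejected.
  const settled = await Promise.allSettled([
    runStubAgent(roles[0]),
    runStubAgent(roles[1], true), // simulate one failing agent
    runStubAgent(roles[2]),
  ]);

  const succeeded: string[] = [];
  const failed: string[] = [];
  settled.forEach((result, i) => {
    if (result.status === "fulfilled") succeeded.push(result.value);
    else failed.push(`${roles[i]}: ${result.reason}`);
  });
  return { succeeded, failed };
}
```

The merger step can then run on whatever succeeded, optionally noting which specialists were unavailable.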

Next Steps

  • Add real tools per agent using LangChain tool calling instead of plain prompts.
  • Replace Promise.all with Promise.allSettled and build partial-failure handling.
  • Wrap this pattern in an orchestrator that routes cases based on risk level or product line.
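For the orchestrator idea, a minimal routing sketch might look like this. The risk thresholds and role lists here are illustrative assumptions, not rules from this tutorial:

```typescript
type SupportCase = { description: string; riskScore: number }; // riskScore in [0, 1]

// Decide which agent roles to fan out to before calling Promise.all.
function rolesFor(c: SupportCase): string[] {
  if (c.riskScore >= 0.8) {
    return [
      "fraud analyst",
      "chargeback operations specialist",
      "customer support specialist",
    ];
  }
  if (c.riskScore >= 0.4) {
    return ["fraud analyst", "customer support specialist"];
  }
  return ["customer support specialist"];
}
```

Each case then runs only the roles it needs, e.g. `await Promise.all(rolesFor(c).map((role) => runAgent(role, c.description)))`.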

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
