LangChain Tutorial (TypeScript): running agents in parallel for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to run multiple LangChain agents in parallel with TypeScript, then collect their outputs into one result. You’d use this when one agent handles research, another drafts a response, and a third validates the answer, all at the same time instead of waiting on each other.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • An OpenAI API key
  • Packages:
    • langchain
    • @langchain/openai
    • @langchain/core
    • zod
    • dotenv
    • ts-node or a TypeScript build setup
  • A .env file with:
    • OPENAI_API_KEY=your_key_here

Install everything:

npm install langchain @langchain/openai @langchain/core zod dotenv
npm install -D typescript ts-node @types/node

Step-by-Step

  1. Create a small project setup and load your API key. Keep this simple: the goal is to run two agents side by side, not build a full app shell.
import "dotenv/config";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is missing");
}

console.log("API key loaded");
  2. Build two separate agents with different jobs. One agent will act as a concise researcher, and the other will act as a risk reviewer.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createAgent } from "langchain/agents";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const researcher = createAgent({
  llm: model,
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "You are a concise research assistant. Answer in 3 bullets."],
    ["human", "{input}"],
  ]),
});

const reviewer = createAgent({
  llm: model,
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "You are a strict reviewer. Point out risks and missing details."],
    ["human", "{input}"],
  ]),
});
  3. Run both agents at the same time with Promise.all. This is the core pattern: start both calls immediately, wait for both to finish, then merge the results.
async function runParallelAgents(input: string) {
  const [researchResult, reviewResult] = await Promise.all([
    researcher.invoke({ input }),
    reviewer.invoke({ input }),
  ]);

  return {
    research: researchResult.output,
    review: reviewResult.output,
  };
}
  4. Add an orchestrator that formats the final output. In production, this is where you would combine results into one response for your app or workflow engine.
async function main() {
  const topic = "Explain why banks use parallel AI agents for loan application triage.";

  const result = await runParallelAgents(topic);

  console.log("\n--- Research Agent ---\n");
  console.log(result.research);

  console.log("\n--- Review Agent ---\n");
  console.log(result.review);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
  5. Put it all together in one file and run it. Save this as parallel-agents.ts, then execute it with npx ts-node parallel-agents.ts.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createAgent } from "langchain/agents";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is missing");
}

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const researcher = createAgent({
  llm: model,
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "You are a concise research assistant. Answer in 3 bullets."],
    ["human", "{input}"],
  ]),
});

const reviewer = createAgent({
  llm: model,
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "You are a strict reviewer. Point out risks and missing details."],
    ["human", "{input}"],
  ]),
});

async function runParallelAgents(input: string) {
  const [researchResult, reviewResult] = await Promise.all([
    researcher.invoke({ input }),
    reviewer.invoke({ input }),
  ]);

  return {
    research: researchResult.output,
    review: reviewResult.output,
  };
}

async function main() {
  const topic = "Explain why banks use parallel AI agents for loan application triage.";
  const result = await runParallelAgents(topic);

  console.log("\n--- Research Agent ---\n");
  console.log(result.research);

  console.log("\n--- Review Agent ---\n");
  console.log(result.review);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});

Testing It

Run the script and confirm you get two separate outputs from a single execution. Both agents should respond to the same input concurrently, with neither waiting for the other to finish.

If you want to verify parallelism more clearly, add timestamps before and after each invoke() call and compare them against a sequential version using two await statements. In practice, parallel execution should reduce total wall-clock time when both calls are independent.
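To see the difference concretely, here is a small self-contained sketch that compares sequential and parallel timing, with setTimeout delays standing in for the agent invoke() calls (delay, timed, and demo are illustrative names, not LangChain APIs):

```typescript
// delay() stands in for an agent invoke() call that takes `ms` milliseconds.
const delay = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// timed() logs how long an async function takes end to end.
async function timed(label: string, fn: () => Promise<void>) {
  const start = Date.now();
  await fn();
  console.log(`${label}: ${Date.now() - start}ms`);
}

async function demo() {
  // Sequential: two awaits in a row, so total ≈ the sum of both delays.
  await timed("sequential", async () => {
    await delay(100);
    await delay(100);
  });

  // Parallel: both promises start immediately, so total ≈ the slower delay.
  await timed("parallel", async () => {
    await Promise.all([delay(100), delay(100)]);
  });
}

demo();
```

The sequential run should log roughly 200ms and the parallel run roughly 100ms, which is exactly the saving you get when the two agent calls are independent.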

Also check that failures are handled cleanly. If one agent throws an error, Promise.all will reject immediately, which is usually what you want for strict workflows; if you need partial success, switch to Promise.allSettled.
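If partial success is what you want, the switch is small. Here is a sketch using Promise.allSettled, with okAgent and failingAgent as hypothetical stand-ins for real agent invoke() calls:

```typescript
// Hypothetical stand-in for an agent call that succeeds.
async function okAgent(input: string) {
  return { output: `answer for: ${input}` };
}

// Hypothetical stand-in for an agent call that fails.
async function failingAgent(_input: string): Promise<{ output: string }> {
  throw new Error("agent timed out");
}

async function runWithPartialSuccess(input: string) {
  // allSettled never rejects: each entry reports "fulfilled" or "rejected".
  const [research, review] = await Promise.allSettled([
    okAgent(input),
    failingAgent(input),
  ]);

  return {
    research: research.status === "fulfilled" ? research.value.output : null,
    review: review.status === "fulfilled" ? review.value.output : null,
  };
}

runWithPartialSuccess("demo").then((result) => console.log(result));
```

With Promise.all the first failure would reject the whole call; here the failed agent simply comes back as null and the successful one still carries its result.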

Next Steps

  • Add a third agent that merges or adjudicates the two outputs into a final answer.
  • Replace simple prompts with tools so each agent can call APIs or internal services.
  • Learn Promise.allSettled and timeout wrappers for resilient multi-agent orchestration.
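A timeout wrapper can be as small as a Promise.race between the real call and a rejecting timer. withTimeout below is a hypothetical helper, not a LangChain API:

```typescript
// Rejects if `promise` does not settle within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    ),
  ]);
}
```

You would then wrap each call, e.g. withTimeout(researcher.invoke({ input }), 10_000), so one stalled agent cannot hang the whole Promise.all.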

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
