LlamaIndex Tutorial (TypeScript): Running Agents in Parallel for Advanced Developers
This tutorial shows you how to run multiple LlamaIndex agents in parallel from TypeScript, then merge their outputs into one result. You need this when a single agent becomes the bottleneck: multi-source research, parallel tool calls, or fan-out workflows where latency matters.
What You'll Need
- Node.js 18+
- TypeScript 5+
- An OpenAI API key set as `OPENAI_API_KEY`
- Packages: `llamaindex`, `zod`, and `tsx` (for running TypeScript directly)
- A project with ESM enabled, or a modern TypeScript setup that can run `import` syntax
Step-by-Step
- Install the dependencies and set up your environment.
This example uses LlamaIndex’s TypeScript SDK. Zod is installed alongside it for structured outputs if you extend the example, though the core flow below does not use it directly.
npm init -y
npm install llamaindex zod
npm install -D typescript tsx @types/node
- Create a small agent factory with a shared model and a simple tool.
The important part is that each agent is independent, so you can execute them concurrently without shared mutable state.
import { openai, FunctionTool, ReActAgentWorker, AgentRunner } from "llamaindex";

// One shared LLM instance; each agent below gets its own runner and state.
const llm = openai({ model: "gpt-4o-mini" });

// A trivial tool so the ReAct loop has something to call.
const getTimestampTool = FunctionTool.from(
  async () => new Date().toISOString(),
  {
    name: "get_timestamp",
    description: "Returns the current UTC timestamp.",
  }
);

function createAgent(systemPrompt: string) {
  const worker = ReActAgentWorker.fromTools([getTimestampTool], {
    llm,
    systemPrompt,
  });
  return new AgentRunner({ worker });
}
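The no-shared-state point can be demonstrated without LlamaIndex at all. Below, `HistoryAgent` is a made-up stand-in that accumulates chat history the way an agent runner does; sharing one instance across concurrent calls interleaves the histories, while separate instances (as `createAgent` produces) stay clean:

```typescript
// Toy stand-in for an agent that keeps per-conversation state.
class HistoryAgent {
  private history: string[] = [];

  async chat(message: string): Promise<string[]> {
    this.history.push(message);
    // Simulated model latency.
    await new Promise((resolve) => setTimeout(resolve, Math.random() * 10));
    this.history.push(`answer to: ${message}`);
    return [...this.history]; // snapshot of this agent's history
  }
}

async function demo() {
  // Shared instance: both conversations write into the same history array.
  const shared = new HistoryAgent();
  const [a, b] = await Promise.all([shared.chat("q1"), shared.chat("q2")]);

  // Separate instances: each conversation keeps its own two-entry history.
  const [c, d] = await Promise.all([
    new HistoryAgent().chat("q1"),
    new HistoryAgent().chat("q2"),
  ]);
  return { a, b, c, d };
}

demo().then(({ a, b, c, d }) => {
  console.log("shared:", a.length, b.length);   // interleaved: one sees 3 entries, the other 4
  console.log("separate:", c.length, d.length); // clean: 2 and 2
});
```

Each call to `createAgent` in the tutorial plays the role of `new HistoryAgent()` here: fresh state per conversation, which is what makes the concurrent execution in the next step safe.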
- Run several agents in parallel with `Promise.all`.
Each agent gets a different prompt and returns its own answer. In production, this pattern is what keeps your orchestration layer fast when the work can be split cleanly.
async function runParallelAgents() {
  const agents = [
    createAgent("You are a compliance analyst. Be concise."),
    createAgent("You are a risk analyst. Focus on operational risk."),
    createAgent("You are a product analyst. Focus on user impact."),
  ];
  const prompts = [
    "Summarize why parallel agent execution helps in enterprise workflows.",
    "Explain one risk of running multiple agents concurrently.",
    "Describe one benefit for customer support automation.",
  ];
  // Fan out: all three chat calls start immediately and resolve independently.
  const results = await Promise.all(
    agents.map((agent, index) =>
      agent.chat({ message: prompts[index] }).then((response) => response.response)
    )
  );
  return results;
}
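One caveat with `Promise.all`: a single rejected `.chat()` call rejects the whole batch. If partial results are acceptable, `Promise.allSettled` keeps whatever succeeded. This sketch uses plain async functions in place of agent calls; nothing here is a LlamaIndex API:

```typescript
// Stand-ins for agent.chat() calls; the second one fails.
const tasks: Array<() => Promise<string>> = [
  async () => "compliance summary",
  async () => { throw new Error("rate limited"); },
  async () => "product impact note",
];

async function runTolerant(): Promise<string[]> {
  const settled = await Promise.allSettled(tasks.map((task) => task()));
  // Keep fulfilled answers; drop (or log) the failures.
  return settled.flatMap((result) =>
    result.status === "fulfilled" ? [result.value] : []
  );
}

runTolerant().then((answers) => console.log(answers)); // two of three survive
```

Whether to fail fast or tolerate partial results is a product decision: a compliance report probably needs all three perspectives, while a best-effort dashboard can render whatever arrived.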
- Merge the outputs into one final synthesis step.
This is where parallel execution becomes useful: one model call gathers the independent answers and turns them into something actionable.
async function synthesizeResults(results: string[]) {
  const synthesisAgent = createAgent(
    "You are an enterprise architect. Combine inputs into a crisp final answer."
  );
  const message = `
Combine these three agent outputs into one executive summary:
1) ${results[0]}
2) ${results[1]}
3) ${results[2]}
Return:
- A single paragraph summary
- Three bullet points of practical takeaways
`;
  const response = await synthesisAgent.chat({ message });
  return response.response;
}
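The template above hardcodes exactly three outputs. If the fan-out size ever changes, building the numbered list from the array is safer. `buildSynthesisPrompt` is a hypothetical helper, not part of the tutorial's API:

```typescript
// Build the synthesis prompt from however many agent outputs arrived.
function buildSynthesisPrompt(results: string[]): string {
  const numbered = results.map((text, i) => `${i + 1}) ${text}`).join("\n");
  return [
    `Combine these ${results.length} agent outputs into one executive summary:`,
    numbered,
    "Return:",
    "- A single paragraph summary",
    "- Three bullet points of practical takeaways",
  ].join("\n");
}

console.log(buildSynthesisPrompt(["answer A", "answer B"]));
```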
- Put it together in a runnable script.
This file executes the fan-out/fan-in flow end to end. Save it as `parallel-agents.ts` and run it with `npx tsx parallel-agents.ts`.
import { openai, FunctionTool, ReActAgentWorker, AgentRunner } from "llamaindex";

const llm = openai({ model: "gpt-4o-mini" });

const getTimestampTool = FunctionTool.from(
  async () => new Date().toISOString(),
  {
    name: "get_timestamp",
    description: "Returns the current UTC timestamp.",
  }
);

function createAgent(systemPrompt: string) {
  const worker = ReActAgentWorker.fromTools([getTimestampTool], {
    llm,
    systemPrompt,
  });
  return new AgentRunner({ worker });
}

async function runParallelAgents() {
  const agents = [
    createAgent("You are a compliance analyst. Be concise."),
    createAgent("You are a risk analyst. Focus on operational risk."),
    createAgent("You are a product analyst. Focus on user impact."),
  ];
  const prompts = [
    "Summarize why parallel agent execution helps in enterprise workflows.",
    "Explain one risk of running multiple agents concurrently.",
    "Describe one benefit for customer support automation.",
  ];
  return Promise.all(
    agents.map((agent, index) =>
      agent.chat({ message: prompts[index] }).then((response) => response.response)
    )
  );
}

async function synthesizeResults(results: string[]) {
  const synthesisAgent = createAgent(
    "You are an enterprise architect. Combine inputs into a crisp final answer."
  );
  const message = `
Combine these three agent outputs into one executive summary:
1) ${results[0]}
2) ${results[1]}
3) ${results[2]}
Return:
- A single paragraph summary
- Three bullet points of practical takeaways
`;
  const response = await synthesisAgent.chat({ message });
  return response.response;
}

async function main() {
  const results = await runParallelAgents();
  console.log("Parallel results:\n", results);
  const finalSummary = await synthesizeResults(results);
  console.log("\nFinal summary:\n", finalSummary);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
Testing It
Run the script with your OpenAI key exported in the environment:
export OPENAI_API_KEY="your-key"
npx tsx parallel-agents.ts
You should see three independent agent responses first, followed by one synthesized summary. Because Promise.all runs the calls concurrently, total latency is bounded by the slowest call rather than the sum of all three.
To verify the concurrency, record timestamps before and after each .chat() call and compare total runtime against sequential execution. If you replace Promise.all with awaited calls in a plain for...of loop, total latency becomes roughly the sum of the individual calls.
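Here is a small timing harness that makes that comparison concrete. The `setTimeout` delays are stand-ins for `.chat()` latency; no LlamaIndex calls are involved:

```typescript
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Three simulated agent calls with different latencies.
const calls = [80, 120, 50].map((ms) => async () => {
  await delay(ms);
  return ms;
});

async function timeIt(run: () => Promise<unknown>): Promise<number> {
  const start = Date.now();
  await run();
  return Date.now() - start;
}

async function compare() {
  // Concurrent: total is roughly max(80, 120, 50) = 120 ms.
  const parallelMs = await timeIt(() => Promise.all(calls.map((call) => call())));
  // Sequential: total is roughly 80 + 120 + 50 = 250 ms.
  const sequentialMs = await timeIt(async () => {
    for (const call of calls) await call();
  });
  return { parallelMs, sequentialMs };
}

compare().then((timings) => console.log(timings));
```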
If you want stronger validation, make each prompt ask for different facts and confirm that the final synthesis preserves all three perspectives instead of collapsing them into one generic answer.
Next Steps
- Add per-agent tools so each worker can query different systems in parallel.
- Replace the simple synthesis step with a router that decides whether to summarize, rank, or extract structured output.
- Add retries and timeout handling around each `.chat()` call before you ship this pattern to production.
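For that last point, a minimal wrapper might look like the sketch below. `withResilience` is a hypothetical helper, the retry policy has no backoff on purpose, and the defaults are arbitrary:

```typescript
// Wrap any async call with a per-attempt timeout and simple retries.
async function withResilience<T>(
  call: () => Promise<T>,
  { retries = 2, timeoutMs = 30_000 }: { retries?: number; timeoutMs?: number } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      // Race this attempt against a timeout.
      return await Promise.race([
        call(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error(`timeout after ${timeoutMs}ms`)), timeoutMs)
        ),
      ]);
    } catch (error) {
      lastError = error; // retry on failure or timeout
    }
  }
  throw lastError;
}

// Usage against an agent would look like:
//   withResilience(() => agent.chat({ message }), { retries: 2, timeoutMs: 20_000 });
```

In production you would likely add exponential backoff between attempts and an AbortController so a timed-out request is actually cancelled rather than left running.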
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.