CrewAI Tutorial (TypeScript): streaming agent responses for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to stream a CrewAI agent’s response token-by-token in a TypeScript app, instead of waiting for the full answer at the end. You need this when you want live UI feedback, better perceived latency, or to pipe agent output into logs, terminals, or chat interfaces as it’s generated.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project already set up
  • CrewAI installed in your project
  • An OpenAI API key
  • A terminal that can run TypeScript scripts
  • Basic familiarity with Agent and Task in CrewAI

Install the packages you need:

npm install @crewai/crewai dotenv
npm install -D typescript tsx @types/node

Create a .env file with your key:

OPENAI_API_KEY=your_openai_api_key_here

Step-by-Step

  1. Start by creating a small TypeScript script that loads environment variables and imports the CrewAI primitives. For streaming, the important part is to configure the agent so the underlying LLM can emit partial output while the task is running.
import "dotenv/config";
import { Agent, Task, Crew } from "@crewai/crewai";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY in .env");
}

const agent = new Agent({
  role: "Support Analyst",
  goal: "Answer customer questions clearly and concisely",
  backstory: "You help users understand product behavior and troubleshooting steps.",
  llm: "gpt-4o-mini",
});
  2. Next, define a task that asks for a short response. Keep the prompt focused so it’s easy to see streaming behavior in the terminal without waiting on a long generation.
const task = new Task({
  description:
    "Explain in plain English how streaming responses help user experience in an AI chat app.",
  expectedOutput: "A short explanation with practical benefits.",
  agent,
});

const crew = new Crew({
  agents: [agent],
  tasks: [task],
});
  3. Now run the crew and print the final result. This version doesn’t stream yet: it handles the different result shapes you may get back (a plain string, or an object with an output field), which gives you a working baseline before wiring up streaming.
async function main() {
  const result = await crew.kickoff();

  if (typeof result === "string") {
    console.log(result);
    return;
  }

  if (result && typeof result === "object" && "output" in result && typeof result.output === "string") {
    console.log(result.output);
    return;
  }

  console.log(String(result));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
  4. If you want real terminal-style streaming, use an async iterator when your CrewAI version exposes streamed events. This pattern prints chunks as they arrive and still works cleanly with standard Node.js execution.
async function streamRun() {
  const stream = await crew.kickoff({ stream: true });

  if (Symbol.asyncIterator in Object(stream)) {
    for await (const chunk of stream as AsyncIterable<{ content?: string }>) {
      if (chunk.content) process.stdout.write(chunk.content);
    }
    process.stdout.write("\n");
    return;
  }

  console.log(String(stream));
}

streamRun().catch((err) => {
  console.error(err);
  process.exit(1);
});
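The duck-typing check above (`Symbol.asyncIterator in Object(stream)`) is worth seeing in isolation, because it's what lets the same code path handle both a streamed and a non-streamed result. Here is a self-contained sketch with no CrewAI dependency: `collectResult` and `demoStream` are illustrative names, and the async generator simply stands in for whatever iterable your SDK version returns.

```typescript
// Consume either an async-iterable stream of chunks or a plain
// final value, using the same check as in streamRun() above.
async function collectResult(result: unknown): Promise<string> {
  let out = "";
  if (Symbol.asyncIterator in Object(result)) {
    // Streamed case: accumulate chunk contents as they arrive.
    for await (const chunk of result as AsyncIterable<{ content?: string }>) {
      if (chunk.content) out += chunk.content;
    }
    return out;
  }
  // Non-streamed case: fall back to the final value as-is.
  return String(result);
}

// Stand-in for a streaming SDK response.
async function* demoStream(): AsyncGenerator<{ content?: string }> {
  yield { content: "chunk-by-" };
  yield { content: "chunk" };
}
```

Calling `collectResult(demoStream())` takes the streaming branch, while `collectResult("final answer")` falls through to the fallback, so the function is safe to use without knowing in advance which shape you'll get.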
  5. Put it together in one file so you can run it directly. This version keeps both non-streaming and streaming-safe handling in place, which matters because SDK behavior can differ slightly across versions.
import "dotenv/config";
import { Agent, Task, Crew } from "@crewai/crewai";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY in .env");
}

const agent = new Agent({
  role: "Support Analyst",
  goal: "Answer customer questions clearly and concisely",
  backstory: "You help users understand product behavior and troubleshooting steps.",
  llm: "gpt-4o-mini",
});

const task = new Task({
  description:
    "Explain in plain English how streaming responses help user experience in an AI chat app.",
  expectedOutput: "A short explanation with practical benefits.",
  agent,
});

const crew = new Crew({
  agents: [agent],
  tasks: [task],
});

async function main() {
  const stream = await crew.kickoff({ stream: true });

  if (Symbol.asyncIterator in Object(stream)) {
    for await (const chunk of stream as AsyncIterable<{ content?: string }>) {
      if (chunk.content) process.stdout.write(chunk.content);
    }
    process.stdout.write("\n");
    return;
  }

  console.log(String(stream));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

Testing It

Run the script with tsx so TypeScript executes without a separate build step:

npx tsx src/index.ts

If streaming is working, you should see output appear gradually instead of all at once. If nothing streams and you only get a final block of text, check whether your installed CrewAI version supports kickoff({ stream: true }) and whether your model/provider supports streamed completions.

Also verify that .env is loaded correctly by temporarily logging process.env.OPENAI_API_KEY?.slice(0, 8). If the script fails immediately, it’s usually one of three things: missing API key, wrong package version, or a model name that your account cannot access.
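A small helper makes that check safer than logging the raw value, since a prefix is enough to confirm the key loaded without leaking the full secret into your terminal or logs. The `maskKey` name is just a suggestion:

```typescript
// Show only the first few characters of a secret so you can
// confirm it loaded without printing the whole key.
function maskKey(key: string | undefined): string {
  if (!key) return "(missing)";
  return key.slice(0, 8) + "…";
}

console.log("OPENAI_API_KEY:", maskKey(process.env.OPENAI_API_KEY));
```

Remember to remove even this masked logging before deploying anywhere shared.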

Next Steps

  • Add an event handler that forwards streamed chunks into a WebSocket or SSE endpoint.
  • Wrap this into an Express or Fastify route so a frontend can render tokens live.
  • Move from one agent to multi-agent crews and stream intermediate reasoning outputs separately from final answers.
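As a starting point for the SSE idea, here is a minimal sketch using only Node's built-in http module, so it runs without Express or Fastify. The `startSseServer` helper and the `makeStream` parameter are hypothetical names: `makeStream` stands in for something like `() => crew.kickoff({ stream: true })`, and any async iterable of `{ content?: string }` chunks will work.

```typescript
import http from "node:http";
import { once } from "node:events";

type Chunk = { content?: string };

// Start an HTTP server that forwards streamed chunks to the
// client as Server-Sent Events. Each SSE message is a "data:"
// line followed by a blank line.
async function startSseServer(
  makeStream: () => AsyncIterable<Chunk>,
): Promise<http.Server> {
  const server = http.createServer(async (_req, res) => {
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    });
    for await (const chunk of makeStream()) {
      if (chunk.content) {
        res.write(`data: ${JSON.stringify({ token: chunk.content })}\n\n`);
      }
    }
    res.end(); // close the stream once the agent is done
  });
  server.listen(0); // 0 = pick any free port
  await once(server, "listening");
  return server;
}
```

A browser frontend can then consume this endpoint with `new EventSource(url)` and append each `token` to the chat UI as it arrives.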

By Cyprian Aarons, AI Consultant at Topiax.