AutoGen Tutorial (TypeScript): streaming agent responses for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to wire up AutoGen in TypeScript so an agent can stream partial responses back to your app instead of waiting for the full completion. Reach for this pattern whenever you want a better UX in chat apps, dashboards, or internal tools where users should see the model “typing” as it works.
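Before touching AutoGen, it helps to see the core pattern in isolation. The sketch below involves no AutoGen at all; the fake streamChunks generator is a stand-in for a model stream (in the real tutorial, the agent's streaming call plays this role). It contrasts waiting for the full string with consuming chunks the moment they arrive:

```typescript
// A stand-in for a model stream: yields text chunks with a small delay.
// In the real tutorial, the agent's streaming call plays this role.
async function* streamChunks(): AsyncGenerator<string> {
  const chunks = ["Streaming ", "shows ", "partial ", "output."];
  for (const chunk of chunks) {
    await new Promise((resolve) => setTimeout(resolve, 10));
    yield chunk;
  }
}

// Non-streaming: the caller sees nothing until the whole string is ready.
async function runBlocking(): Promise<string> {
  let full = "";
  for await (const chunk of streamChunks()) full += chunk;
  return full;
}

// Streaming: the caller observes each chunk as it is produced,
// while still ending up with the same complete string.
async function runStreaming(onChunk: (text: string) => void): Promise<string> {
  let full = "";
  for await (const chunk of streamChunks()) {
    onChunk(chunk); // e.g. process.stdout.write(chunk) in a CLI
    full += chunk;
  }
  return full;
}
```

Everything that follows is this same loop, just with AutoGen producing the chunks instead of a hand-rolled generator.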

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project
  • An OpenAI API key
  • AutoGen for JavaScript/TypeScript installed via npm
  • A terminal and a code editor
  • Basic familiarity with AssistantAgent and UserProxyAgent

Step-by-Step

  1. Create a new TypeScript project and install the packages you need. I’m using the official AutoGen packages plus dotenv so the API key stays out of source control.
mkdir autogen-streaming-demo
cd autogen-streaming-demo
npm init -y
npm install @autogen/core @autogen/openai dotenv
npm install -D typescript tsx @types/node
npx tsc --init
  2. Add your OpenAI API key to an environment file. Keep this simple: one variable, loaded at runtime.
cat > .env << 'EOF'
OPENAI_API_KEY=your_openai_api_key_here
EOF
  3. Create a small TypeScript entry point that sets up an assistant agent with streaming enabled. The important part is using runStream() instead of run(), then consuming the stream events as they arrive.
import "dotenv/config";
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

async function main() {
  const modelClient = new OpenAIChatCompletionClient({
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
  });

  const agent = new AssistantAgent({
    name: "assistant",
    modelClient,
    systemMessage: "You are a concise assistant.",
  });

  const stream = await agent.runStream([
    { role: "user", content: "Explain streaming responses in one paragraph." },
  ]);

  for await (const event of stream) {
    if (event.type === "text") {
      process.stdout.write(event.text);
    }
  }

  process.stdout.write("\n");
}

main();
  4. If you want cleaner output, handle only text deltas and ignore non-text events. In real apps, you’ll usually separate token streaming from tool events, logs, or final messages.
import "dotenv/config";
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

async function main() {
  const client = new OpenAIChatCompletionClient({
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
  });

  const agent = new AssistantAgent({
    name: "assistant",
    modelClient: client,
  });

  const stream = await agent.runStream([
    { role: "user", content: "Give me three benefits of streaming AI responses." },
  ]);

  for await (const event of stream) {
    if (event.type === "text") {
      process.stdout.write(event.text);
    }
  }
}
main();
  5. Run the script with tsx. If everything is wired correctly, you should see text appear incrementally instead of all at once.
npx tsx index.ts

Testing It

The fastest test is to use a longer prompt, not a one-liner. Streaming is easiest to spot when the model has enough work to produce multiple chunks, so try something like “Write a short comparison of polling vs streaming in AI apps.”

If nothing prints, check three things first: your .env file exists, OPENAI_API_KEY is set correctly, and your model name is valid for your account. Also confirm that you’re using runStream() and iterating over the returned async stream.
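Those first two checks are cheap to automate. Here is a minimal preflight sketch you could drop at the top of your entry point; checkEnv is a hypothetical helper name, not part of AutoGen, and it only covers what this tutorial depends on:

```typescript
// Hypothetical preflight helper: fail fast with a clear message instead of
// debugging a silent, empty stream later.
function checkEnv(env: Record<string, string | undefined> = process.env): string[] {
  const problems: string[] = [];
  const key = env.OPENAI_API_KEY;
  if (!key) {
    problems.push("OPENAI_API_KEY is not set; is .env present and loaded via dotenv?");
  } else if (key === "your_openai_api_key_here") {
    problems.push("OPENAI_API_KEY still contains the placeholder value from .env");
  }
  return problems;
}

// Example: the placeholder value from .env is flagged as a problem.
console.log(checkEnv({ OPENAI_API_KEY: "your_openai_api_key_here" }));
```

In your real entry point you would call checkEnv() with no arguments, print any problems, and exit before constructing the model client.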

For a more realistic test, wrap the streamed output in a web server or CLI spinner later on. The core behavior you want here is partial text appearing before the full response finishes.

Next Steps

  • Add tool calling to streamed agents so you can show both reasoning progress and live output.
  • Wire the same pattern into an HTTP endpoint using Server-Sent Events.
  • Build a multi-agent setup where each agent streams into its own UI panel.
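For the Server-Sent Events idea, the shape of the endpoint looks roughly like this. This is a sketch using only Node’s built-in http module; textEvents is a stand-in for the agent stream (in a real app you would iterate runStream() and forward only the text events), and the [DONE] sentinel is a convention borrowed from common streaming APIs, not something AutoGen emits:

```typescript
import { createServer } from "node:http";

// Stand-in for the agent's stream; replace with the text events
// you pull out of the agent's streaming call.
async function* textEvents(): AsyncGenerator<string> {
  for (const chunk of ["Hello", ", ", "world"]) yield chunk;
}

// Format one text chunk as a Server-Sent Events "data:" frame.
// JSON-encoding keeps newlines inside chunks from breaking the framing.
function sseFrame(text: string): string {
  return `data: ${JSON.stringify({ text })}\n\n`;
}

const server = createServer(async (req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  for await (const chunk of textEvents()) {
    res.write(sseFrame(chunk)); // each chunk goes out as its own event
  }
  res.write("data: [DONE]\n\n"); // sentinel so the client knows to close
  res.end();
});

// server.listen(3000); // uncomment, then `curl -N localhost:3000` to watch
```

On the browser side, an EventSource (or a fetch reader) consumes these frames and appends each text field to the UI, giving you the same “typing” effect as the CLI demo.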

By Cyprian Aarons, AI Consultant at Topiax.
