CrewAI Tutorial (TypeScript): streaming agent responses for intermediate developers
This tutorial shows you how to stream CrewAI agent responses in TypeScript instead of waiting for the full completion payload. You need this when you want live token-by-token output in a CLI, web app, or support workflow where users should see progress immediately.
What You'll Need
- Node.js 18+ installed
- A CrewAI account and API key
- A supported LLM provider key, such as `OPENAI_API_KEY`, or whichever provider is configured in your CrewAI setup
- A TypeScript project with `typescript`, `tsx` or `ts-node`, and `@crewai/crewai`
- Basic familiarity with agents, tasks, and crews
- A terminal where you can set environment variables
Step-by-Step
- Start by creating a minimal TypeScript project and installing the CrewAI package plus a runner. I use `tsx` here because it executes TypeScript directly without extra build steps.
```bash
mkdir crewai-streaming-demo
cd crewai-streaming-demo
npm init -y
npm install @crewai/crewai dotenv
npm install -D typescript tsx @types/node
```
- Add your environment variables and a basic TypeScript config. Keep secrets out of source control, and make sure your runtime loads `.env` before you instantiate the crew.
```bash
cat > .env << 'EOF'
OPENAI_API_KEY=your_openai_key_here
CREWAI_API_KEY=your_crewai_key_here
EOF

cat > tsconfig.json << 'EOF'
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "resolveJsonModule": true,
    "types": ["node"]
  }
}
EOF
```
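Before wiring up the crew, it can help to fail fast when a required key is missing rather than letting the first network call fail with a cryptic auth error. A minimal sketch (the `missingEnvVars` helper is mine, not part of CrewAI or dotenv):

```typescript
// Hypothetical helper: list the required env vars that are unset or empty.
function missingEnvVars(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): string[] {
  return names.filter((name) => !env[name]);
}

// Example with a fake env object so the check is easy to unit test:
const missing = missingEnvVars(
  ['OPENAI_API_KEY', 'CREWAI_API_KEY'],
  { OPENAI_API_KEY: 'sk-test' },
);
if (missing.length > 0) {
  console.error(`Missing env vars: ${missing.join(', ')}`);
}
```

Call this right after `import 'dotenv/config'` so a missing key surfaces before any agent runs.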
- Create an agent, task, and crew with streaming enabled. The important part is that the crew run emits incremental output through callbacks instead of only returning a final string.
```typescript
// src/index.ts
import 'dotenv/config';
import { Agent, Task, Crew } from '@crewai/crewai';

const agent = new Agent({
  role: 'Banking assistant',
  goal: 'Explain account activity clearly and concisely',
  backstory: 'You help support teams summarize customer account events.',
});

const task = new Task({
  description: 'Summarize the following transaction note in plain English: ACH debit from payroll processor.',
  expectedOutput: 'A short customer-friendly explanation.',
  agent,
});

const crew = new Crew({
  agents: [agent],
  tasks: [task],
});
```
- Wire up streaming output using the callback hook. In practice, this is where you push tokens to stdout, a websocket, or your UI state store.
```typescript
// src/index.ts continued
async function main() {
  const result = await crew.kickoff({
    stream: true,
    callbacks: {
      onTextDelta(delta: string) {
        process.stdout.write(delta);
      },
      onEnd() {
        process.stdout.write('\n');
      },
      onError(error: Error) {
        console.error('\nStream error:', error.message);
      },
    },
  });
  console.log('\n\nFinal result:\n', result);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```
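If you also need the assembled text (for logging or persistence, as discussed in the testing section below), a small accumulator collects the deltas without changing the callback shape. The `createAccumulator` helper is illustrative, not a CrewAI API:

```typescript
// Collect streamed chunks so the assembled text can be compared
// with the final result after the stream ends.
function createAccumulator() {
  const chunks: string[] = [];
  return {
    push(delta: string): void {
      chunks.push(delta);
    },
    text(): string {
      return chunks.join('');
    },
  };
}

// Inside onTextDelta you would call acc.push(delta) before writing to stdout.
const acc = createAccumulator();
['ACH debit ', 'from payroll ', 'processor'].forEach((d) => acc.push(d));
console.log(acc.text()); // "ACH debit from payroll processor"
```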
- Run the script and watch the response arrive incrementally. If your provider supports streaming correctly, you should see text appear before the final result is printed.
```bash
npx tsx src/index.ts
```
- If you want to use this in an API route, keep the same callback pattern and forward deltas to the client as they arrive. For example, in a Node server you would write each delta to an HTTP response or websocket channel.
```typescript
// server.ts — reuses the `crew` instance defined in src/index.ts
import { createServer } from 'node:http';

createServer(async (_req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
  await crew.kickoff({
    stream: true,
    callbacks: {
      onTextDelta(delta: string) {
        res.write(delta);
      },
      onEnd() {
        res.end();
      },
      onError(error: Error) {
        // Surface stream failures instead of leaving the connection hanging.
        res.end(`\n[stream error: ${error.message}]`);
      },
    },
  });
}).listen(3000);
```
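If your client consumes the stream with `EventSource` rather than raw text, each delta needs Server-Sent Events framing. Here is a sketch of the framing itself; the `sseEvent` helper is mine, and you would set `Content-Type: text/event-stream` and call it from `onTextDelta`:

```typescript
// Format one delta as an SSE event: every payload line gets a "data: "
// prefix, and a blank line terminates the event.
function sseEvent(delta: string): string {
  const payload = delta
    .split('\n')
    .map((line) => `data: ${line}`)
    .join('\n');
  return `${payload}\n\n`;
}

// res.write(sseEvent(delta)) inside onTextDelta would stream SSE frames.
console.log(JSON.stringify(sseEvent('hello')));
```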
Testing It
Run the script once with valid API keys and confirm that text appears progressively instead of only after completion. If nothing streams, check whether your model/provider actually supports streaming and whether your callback names match the package version you installed.
Also verify that errors are surfaced through onError and not swallowed silently. In production, I usually log both the raw deltas and the final assembled response so I can compare what users saw with what got persisted.
Next Steps
- Add a second agent and test streaming across multi-step task execution.
- Forward streamed deltas into a React UI or Next.js route handler.
- Add structured logging so you can trace latency per token chunk and per task step.
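For the structured-logging idea above, the core is just timestamping each delta as it arrives. A minimal sketch with an injectable clock so it can be tested deterministically (the `createLatencyTracker` name is mine):

```typescript
// Track the gap between consecutive deltas; call mark() from onTextDelta,
// then log or export gaps() once the stream ends.
function createLatencyTracker(now: () => number = Date.now) {
  let last = now();
  const gapsMs: number[] = [];
  return {
    mark(): void {
      const t = now();
      gapsMs.push(t - last);
      last = t;
    },
    gaps(): number[] {
      return [...gapsMs];
    },
  };
}
```

Per-chunk gaps make it easy to spot a slow first token (provider latency) versus slow steady-state throughput.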
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.