CrewAI Tutorial (TypeScript): adding observability for beginners
This tutorial shows you how to add observability to a CrewAI workflow in TypeScript so you can see what your agents are doing, how long tasks take, and where failures happen. You need this when a crew works locally but becomes hard to debug in staging or production.
What You'll Need
- Node.js 18+
- A TypeScript project with CrewAI installed
- An OpenAI API key
- A Langfuse account and API key for tracing
- dotenv for loading environment variables
- Basic CrewAI knowledge: agents, tasks, and crews
Install the packages:
npm install @crewai/core @crewai/langfuse dotenv
npm install -D typescript tsx @types/node
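If your project doesn't have a tsconfig.json yet, a minimal configuration like the following works with tsx and Node.js 18. These settings are one reasonable starting point, not requirements:

// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}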
Create a .env file:
OPENAI_API_KEY=your_openai_key
LANGFUSE_PUBLIC_KEY=your_langfuse_public_key
LANGFUSE_SECRET_KEY=your_langfuse_secret_key
LANGFUSE_HOST=https://cloud.langfuse.com
Step-by-Step
- Create a small CrewAI project structure. Keep the setup simple so you can confirm tracing before adding more agents or tools.
// src/index.ts
import "dotenv/config";
import { Agent, Task, Crew } from "@crewai/core";
import { LangfuseTracer } from "@crewai/langfuse";
const tracer = new LangfuseTracer({
publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
secretKey: process.env.LANGFUSE_SECRET_KEY!,
baseUrl: process.env.LANGFUSE_HOST,
});
const agent = new Agent({
name: "Support Analyst",
role: "Customer support analyst",
goal: "Summarize customer issues clearly",
backstory: "You are precise and concise.",
});
const task = new Task({
description: "Summarize this issue: The customer cannot reset their password.",
expectedOutput: "A short support summary",
agent,
});
- Attach the tracer to the crew. This is the part that sends spans and task metadata to Langfuse so you can inspect execution later.
const crew = new Crew({
agents: [agent],
tasks: [task],
verbose: true,
callbacks: [tracer],
});
async function main() {
const result = await crew.kickoff();
console.log("\nFinal result:\n", result);
}
main().catch((error) => {
console.error("Crew execution failed:", error);
process.exit(1);
});
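If you want to see callback events locally before involving Langfuse, you can register a second callback that just logs to the console. This is a minimal sketch: the hook names (onTaskStart, onTaskEnd) are assumptions, so check the callback interface exported by your version of @crewai/core and adjust to match.

// Hypothetical console callback for local debugging. The hook names are
// assumptions; align them with the callback type in @crewai/core.
const consoleTracer = {
  onTaskStart: (task: { description: string }) => {
    console.log(`[trace] task started: ${task.description}`);
  },
  onTaskEnd: (task: { description: string }, output: unknown) => {
    console.log(`[trace] task finished: ${task.description}`, output);
  },
};

// Register it alongside the Langfuse tracer:
// callbacks: [tracer, consoleTracer]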
- Add a second task so you can see how observability helps with multi-step flows. In real projects, this is where tracing becomes useful, because you can follow the chain of work across tasks.
import "dotenv/config";
import { Agent, Task, Crew } from "@crewai/core";
import { LangfuseTracer } from "@crewai/langfuse";
const tracer = new LangfuseTracer({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  baseUrl: process.env.LANGFUSE_HOST,
});
const analyst = new Agent({
  name: "Support Analyst",
  role: "Customer support analyst",
  goal: "Summarize customer issues clearly",
  backstory: "You are precise and concise.",
});
const reviewer = new Agent({
  name: "QA Reviewer",
  role: "Quality reviewer",
  goal: "Check whether summaries are complete and accurate",
  backstory: "You are a careful reviewer who flags missing details.",
});
const summarize = new Task({
description: "Summarize this issue: The customer cannot reset their password.",
expectedOutput: "A concise summary",
agent: analyst,
});
const review = new Task({
description: "Review the summary for completeness and clarity.",
expectedOutput: "A review note with any gaps",
agent: reviewer,
});
- Run the crew with both tasks and inspect the trace output in Langfuse. You should see one trace per run, with nested events for each task if the callback is wired correctly.
const crew = new Crew({
agents: [analyst, reviewer],
tasks: [summarize, review],
verbose: true,
callbacks: [tracer],
});
async function main() {
const result = await crew.kickoff();
console.log(result);
}
main().catch((error) => {
  console.error("Crew execution failed:", error);
  process.exit(1);
});
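One practical caveat: Langfuse clients usually buffer events and send them in the background, so a process that exits immediately after kickoff can drop the final spans. The Langfuse JS SDK exposes flushAsync and shutdownAsync for this; whether @crewai/langfuse's tracer forwards a similar method is an assumption, so check its types. A hedged sketch of how you could extend main:

async function main() {
  const result = await crew.kickoff();
  console.log(result);

  // Hypothetical: flush buffered spans before the process exits so the
  // last task events reach Langfuse. Replace with whatever flush or
  // shutdown method your tracer actually exposes.
  if (typeof (tracer as any).flushAsync === "function") {
    await (tracer as any).flushAsync();
  }
}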
- Add basic context propagation by passing outputs between tasks. This gives you better visibility into what each step consumed and produced, which matters when debugging bad downstream outputs.
const summarizeTask = new Task({
description:
"Summarize this issue for support handoff: The customer cannot reset their password.",
expectedOutput: "A short handoff summary",
agent,
});
const triageTask = new Task({
  description:
    "Use the previous summary to decide whether this is an account access issue or a bug.",
  expectedOutput: "A classification with a one-sentence justification",
  agent,
  // Feed the summary task's output into this task, mirroring CrewAI's
  // context-passing convention, so the trace shows what this step consumed.
  context: [summarizeTask],
});
const crewWithContext = new Crew({
agents: [agent],
tasks: [summarizeTask, triageTask],
verbose: true,
callbacks: [tracer],
});
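If you want to vary the input without editing task descriptions, CrewAI's Python API accepts kickoff(inputs=...) with {placeholder} interpolation in descriptions. Assuming the TypeScript port mirrors this (an assumption worth checking against your version), you could write the summary description as "Summarize this issue for support handoff: {issue}" and run:

async function main() {
  // Hypothetical: mirrors CrewAI's Python kickoff(inputs=...) API, where
  // an {issue} placeholder in the task description is filled at kickoff.
  const result = await crewWithContext.kickoff({
    inputs: { issue: "The customer cannot reset their password." },
  });
  console.log(result);
}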
Testing It
Run the file with tsx:
npx tsx src/index.ts
If observability is working, your terminal should show normal CrewAI execution logs and your Langfuse dashboard should receive a trace shortly after the run completes. If nothing appears in Langfuse, check that your environment variables are loaded and that your network can reach the Langfuse host.
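A quick way to rule out missing configuration is to fail fast before constructing the tracer. This check uses only Node built-ins, so nothing here is version-dependent:

import "dotenv/config";

// Fail fast if any required variable is missing, so a silent
// misconfiguration doesn't look like a Langfuse outage.
const required = ["OPENAI_API_KEY", "LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY"];
const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  throw new Error(`Missing environment variables: ${missing.join(", ")}`);
}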
A good test is to intentionally break one task prompt and rerun it. You should be able to compare traces across runs and see exactly where output quality changed.
Next Steps
- Add tool calls to your agents and trace those tool executions too.
- Learn how to attach custom metadata like tenant ID, environment, or workflow name (see the sketch after this list).
- Set up alerting on failed runs or slow traces so production issues surface early.
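For the metadata point above, here is a hedged sketch. Whether LangfuseTracer accepts a metadata option is an assumption about this package, not a documented API; check its exported types, and fall back to the Langfuse SDK's own trace metadata if it doesn't:

// Hypothetical: a `metadata` option on the tracer is an assumption;
// verify against the types exported by @crewai/langfuse.
const taggedTracer = new LangfuseTracer({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  metadata: {
    tenantId: "acme-corp", // example value
    environment: process.env.NODE_ENV ?? "development",
    workflow: "support-triage",
  },
});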
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit