CrewAI Tutorial (Python): adding observability for beginners
This tutorial shows you how to add observability to a CrewAI Python project so you can see what your agents are doing, how long tasks take, and where failures happen. You need this when a crew works locally but becomes hard to debug once prompts, tools, and multi-step task chains start interacting.
What You'll Need
- Python 3.10 or newer
- A CrewAI project installed with `crewai` and `python-dotenv`
- An API key for the LLM provider you use, for example `OPENAI_API_KEY`
- A Langfuse account if you want production-style tracing
- Optional but useful: `langfuse`, and `crewai[tools]` if you plan to use built-in tools later
Step-by-Step
- Start with a minimal CrewAI project and make sure it runs before adding observability. If the base crew is broken, tracing just gives you more logs about a broken setup.
pip install crewai python-dotenv langfuse
# main.py
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process

load_dotenv()

researcher = Agent(
    role="Researcher",
    goal="Find concise facts about CrewAI observability",
    backstory="You are careful and structured.",
    verbose=True,
)

task = Task(
    description="Explain why observability matters in agent workflows.",
    expected_output="A short explanation with practical reasons.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)
- Add environment variables for your model and tracing backend. For beginners, keep secrets out of code and put them in a `.env` file.
# .env
OPENAI_API_KEY=your_openai_key_here
# Langfuse tracing
LANGFUSE_PUBLIC_KEY=pk_your_public_key_here
LANGFUSE_SECRET_KEY=sk_your_secret_key_here
LANGFUSE_HOST=https://cloud.langfuse.com
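Before wiring up tracing, it helps to fail fast when a key is missing rather than debugging a half-configured run. A minimal sketch (the variable names match the `.env` above; the check itself is illustrative, not part of CrewAI or Langfuse):

# check_env.py
import os
from dotenv import load_dotenv

load_dotenv()

# Fail fast if any required secret is missing from the environment.
REQUIRED = ["OPENAI_API_KEY", "LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY"]
missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All required environment variables are set.")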
- Enable Langfuse callbacks in your CrewAI runtime. This gives you spans for the crew run, individual tasks, and model calls so you can inspect failures instead of guessing.
# main.py
from dotenv import load_dotenv
from langfuse.callback import CallbackHandler  # Langfuse v2 callback handler
from crewai import Agent, Task, Crew, Process

load_dotenv()

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST
# from the environment.
callback_handler = CallbackHandler()

researcher = Agent(
    role="Researcher",
    goal="Find concise facts about CrewAI observability",
    backstory="You are careful and structured.",
    verbose=True,
)

task = Task(
    description="Explain why observability matters in agent workflows.",
    expected_output="A short explanation with practical reasons.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    process=Process.sequential,
    verbose=True,
    # Whether Crew accepts a callbacks list depends on your CrewAI version;
    # the next step shows per-task hooks as an alternative.
    callbacks=[callback_handler],
)

result = crew.kickoff()
print(result)
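One gotcha: Langfuse batches events in a background thread, so a short-lived script can exit before its traces are uploaded. A small addition at the end of `main.py`, assuming your langfuse version exposes `flush()` on the handler (v2 does):

try:
    result = crew.kickoff()
    print(result)
finally:
    # Block until queued trace events are delivered, even if the run fails.
    callback_handler.flush()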
- If your version of CrewAI supports per-agent or per-task hooks, attach them there too. Recent CrewAI versions expose `task_callback` (run after each task) and `step_callback` (run after each agent step) on `Crew`; this is useful when you want traces split by agent responsibility instead of one big run.
from dotenv import load_dotenv
from langfuse.callback import CallbackHandler
from crewai import Agent, Task, Crew, Process

load_dotenv()

callback_handler = CallbackHandler()

researcher = Agent(
    role="Researcher",
    goal="Find concise facts about CrewAI observability",
    backstory="You are careful and structured.",
    verbose=True,
)

task = Task(
    description="List three benefits of observability in AI agents.",
    expected_output="Three bullet points with practical benefits.",
    agent=researcher,
)

def log_task_output(output):
    # Called once per completed task with its TaskOutput, so results
    # are recorded per task instead of per run.
    print(f"Task finished: {output.description}")

# Keep the same callback handler available at the workflow level.
crew = Crew(
    agents=[researcher],
    tasks=[task],
    process=Process.sequential,
    verbose=True,
    task_callback=log_task_output,  # per-task hook (version dependent)
)

result = crew.kickoff()
print(result)
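If your installed CrewAI version exposes none of these hooks, a fallback is to wrap the run manually with the low-level Langfuse client. A minimal sketch, assuming the langfuse v2 SDK (the `trace`/`span` methods changed in v3) and the `crew` object defined above:

from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* variables from the environment

trace = langfuse.trace(name="crew-run")  # one trace per crew run
span = trace.span(name="kickoff")        # wrap the whole kickoff in a span
result = crew.kickoff()
span.end(output=str(result))
langfuse.flush()  # deliver queued events before the process exits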
- Run the script and inspect both local output and your Langfuse dashboard. The local terminal should show the agent activity; the dashboard should show trace timing, model usage, and task execution details.
python main.py
Testing It
Run the script once with valid API keys and confirm that the task completes without errors. In the terminal, you should see verbose agent output plus the final result printed at the end.
Then open Langfuse and look for a new trace from your run. You want to verify that at least one span exists for the execution and that prompt/response metadata is attached.
If nothing shows up in Langfuse, check these first:
- `.env` is being loaded by `load_dotenv()`
- Your API keys are valid
- The callback handler package version matches your installed CrewAI version
If the run fails before any trace appears, fix the model call first. Observability only helps after the underlying request can actually complete.
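A quick way to separate credential problems from integration problems is to check the Langfuse connection on its own. A minimal sketch, assuming the langfuse v2 SDK's `auth_check()`:

from dotenv import load_dotenv
from langfuse import Langfuse

load_dotenv()

langfuse = Langfuse()
# Returns True when the keys and host in your .env are valid.
print("Langfuse credentials valid:", langfuse.auth_check())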
Next Steps
- Add tool calls to your agents so you can trace external API usage as well as LLM calls; a minimal tool sketch follows this list.
- Move from one-agent examples to multi-agent crews and compare traces across roles.
- Add structured logging alongside traces so production debugging covers both execution flow and observability data.
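For the first of these, here is a minimal sketch of a custom tool, assuming the `@tool` decorator shipped with the `crewai[tools]` extra (`crewai_tools`); the tool itself is illustrative:

from crewai import Agent
from crewai_tools import tool

@tool("Word Counter")
def count_words(text: str) -> str:
    """Return the number of words in the given text."""
    return str(len(text.split()))

researcher = Agent(
    role="Researcher",
    goal="Find concise facts about CrewAI observability",
    backstory="You are careful and structured.",
    tools=[count_words],  # tool calls now appear in verbose output and traces
    verbose=True,
)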
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist plus starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.