CrewAI Tutorial (Python): adding human-in-the-loop for intermediate developers
This tutorial shows how to pause a CrewAI workflow, ask a human for approval or edits, and then continue execution with the human’s decision. You need this when an agent is about to send a risky email, approve a claim, or finalize a customer-facing response that should not be fully autonomous.
What You'll Need

- Python 3.10+
- `crewai`
- `crewai-tools` if you want extra tools later
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with `Agent`, `Task`, and `Crew`
- A terminal where you can run Python scripts interactively
Install the package:
```shell
pip install crewai
```
Step-by-Step
- Start with a normal CrewAI setup.
We’ll create one agent that drafts a customer response and one task that produces something we can review before sending.
```python
import os

from crewai import Agent, Task, Crew, Process

# Fail fast if the API key is missing rather than erroring mid-run.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("Set OPENAI_API_KEY before running this script.")

writer = Agent(
    role="Customer Support Writer",
    goal="Draft accurate, concise customer support replies",
    backstory="You write support messages for a regulated financial services team.",
    verbose=True,
)

draft_task = Task(
    description=(
        "Draft a reply to this customer complaint: "
        "'My payment was charged twice and I need this fixed today.'"
    ),
    expected_output="A short support reply suitable for human review.",
    agent=writer,
)
```
- Run the task and stop before any external action happens.
In production, this is where you insert your approval gate: the model drafts content, but a person must review it before it gets sent or committed.
```python
crew = Crew(
    agents=[writer],
    tasks=[draft_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
draft_text = str(result)

print("\n--- DRAFT ---\n")
print(draft_text)
```
- Add the human-in-the-loop checkpoint.
This version asks the operator to approve, reject, or edit the draft. The important part is that the workflow does not continue until the human makes a decision.
```python
def get_human_review(text: str) -> str:
    """Block until a human approves, edits, or rejects the draft."""
    print("\n--- HUMAN REVIEW REQUIRED ---\n")
    print(text)
    print("\nOptions: [a]pprove, [e]dit, [r]eject")
    while True:
        choice = input("Decision: ").strip().lower()
        if choice == "a":
            return text
        if choice == "e":
            edited = input("Paste edited version:\n")
            return edited.strip()
        if choice == "r":
            raise ValueError("Draft rejected by human reviewer.")
        # Anything else re-prompts instead of silently rejecting.
        print("Unrecognized option; enter 'a', 'e', or 'r'.")


approved_text = get_human_review(draft_text)

print("\n--- FINAL APPROVED TEXT ---\n")
print(approved_text)
```
- Continue the workflow with the approved content.
Now we hand the human-approved text to another task that formats it for sending. This keeps generation and execution separated, which is the right pattern for regulated workflows.
```python
sender = Agent(
    role="Message Formatter",
    goal="Turn approved text into a final send-ready message",
    backstory="You prepare final customer messages after compliance review.",
)

send_task = Task(
    description=f"Format this approved message for sending:\n\n{approved_text}",
    expected_output="A final polished message ready to send.",
    agent=sender,
)

send_crew = Crew(
    agents=[sender],
    tasks=[send_task],
    process=Process.sequential,
)

final_result = send_crew.kickoff()

print("\n--- SEND-READY MESSAGE ---\n")
print(final_result)
```
- Wrap it into one executable script.
For real use, keep the approval gate in its own function and log both the draft and the human decision so you have an audit trail.
```python
def main():
    crew = Crew(
        agents=[writer],
        tasks=[draft_task],
        process=Process.sequential,
        verbose=True,
    )
    draft_text = str(crew.kickoff())

    # Approval gate: execution stops here until a human decides.
    approved_text = get_human_review(draft_text)

    send_crew = Crew(
        agents=[sender],
        tasks=[
            Task(
                description=f"Format this approved message for sending:\n\n{approved_text}",
                expected_output="A final polished message ready to send.",
                agent=sender,
            )
        ],
        process=Process.sequential,
    )
    print(send_crew.kickoff())


if __name__ == "__main__":
    main()
```
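One way to keep that audit trail is a small JSON Lines logger. This is a minimal sketch, not a CrewAI feature: the `log_review` helper and the `review_log.jsonl` filename are illustrative names of my own.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("review_log.jsonl")  # hypothetical log file name


def log_review(draft: str, decision: str, final_text: str) -> None:
    """Append one audit record per human decision (JSON Lines format)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,  # "approved", "edited", or "rejected"
        "draft": draft,
        "final_text": final_text,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_review("Original draft", "edited", "Human-edited reply")
print(LOG_PATH.read_text(encoding="utf-8").strip())
```

Call `log_review` once inside `get_human_review` for each outcome (including rejections), so the log shows what the model produced and what the human decided, not just the happy path.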
Testing It
Run the script from your terminal and confirm it pauses after generating the draft. If you choose `a`, it should continue with the original text; if you choose `e`, it should accept your edited version; if you choose `r`, it should raise an error and stop.
Test with a few different prompts: one simple support reply, one compliance-sensitive response, and one intentionally bad draft so you can verify rejection works. Also check that your logs capture both the generated draft and the final approved version.
If you want stronger validation, add automated checks before prompting the human, such as regex rules for prohibited phrases or length limits. That gives you a two-layer control flow: machine guardrails first, human approval second.
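That machine-first layer can be sketched as a simple precheck function. The prohibited-phrase patterns, the 800-character limit, and the `precheck_draft` name are illustrative assumptions, not CrewAI features; tune them to your own compliance rules.

```python
import re

# Illustrative rules only; replace with your team's real compliance list.
PROHIBITED = [r"\bguarantee(?:d)?\b", r"\brisk[- ]free\b"]
MAX_CHARS = 800


def precheck_draft(text: str) -> list[str]:
    """Return a list of guardrail violations; an empty list means the draft
    is safe to show to the human reviewer."""
    problems = []
    for pattern in PROHIBITED:
        if re.search(pattern, text, flags=re.IGNORECASE):
            problems.append(f"prohibited phrase matched: {pattern}")
    if len(text) > MAX_CHARS:
        problems.append(f"draft exceeds {MAX_CHARS} characters")
    return problems


print(precheck_draft("We guarantee a risk-free refund today."))   # two violations
print(precheck_draft("Thanks for reaching out about the charge."))  # clean
```

Run it on `draft_text` before calling `get_human_review`; if the list is non-empty, either send the draft back for regeneration or surface the violations alongside the draft so the reviewer sees them.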
Next Steps
- Add structured output with Pydantic so humans review fields instead of raw text.
- Store approvals in a database table with timestamps and reviewer IDs.
- Move from terminal input to a FastAPI or Streamlit approval UI for real operators.
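As a preview of the first idea, field-by-field review might look like the sketch below. It uses stdlib dataclasses instead of Pydantic to stay dependency-free, and the `ReviewedReply` model and its field names are invented for illustration.

```python
from dataclasses import dataclass, fields


@dataclass
class ReviewedReply:
    """Structured fields a reviewer approves one by one (illustrative model)."""
    greeting: str
    resolution: str
    refund_promised: bool


def summarize_for_review(reply: ReviewedReply) -> str:
    """Render each field on its own line so the reviewer sees structure,
    not one undifferentiated blob of text."""
    return "\n".join(f"{f.name}: {getattr(reply, f.name)}" for f in fields(reply))


reply = ReviewedReply(
    greeting="Hi Sam,",
    resolution="The duplicate charge has been reversed.",
    refund_promised=True,
)
print(summarize_for_review(reply))
```

Reviewing discrete fields like `refund_promised` lets a human veto a specific commitment without re-reading or re-editing the whole message.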
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit