CrewAI Tutorial (Python): adding human-in-the-loop for advanced developers
This tutorial shows how to wire human approval into a CrewAI workflow so an agent can pause before risky actions, wait for a developer or analyst to review the output, and then continue. You need this when the agent is generating customer-facing content, making policy decisions, or preparing actions that should not execute without a person in the loop.
What You'll Need
- Python 3.10+
- `crewai`
- `python-dotenv`
- An OpenAI API key set as `OPENAI_API_KEY`
- A terminal with permission to run a local Python script
- Basic familiarity with CrewAI agents, tasks, and crews
Install the packages:
pip install crewai python-dotenv
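Because the script loads credentials with `load_dotenv()`, one option is a `.env` file in the project root. The key value below is a placeholder, not a real credential:

```shell
# Write a .env file that load_dotenv() picks up at startup.
# The value is a placeholder; use your own key and keep .env out of git.
echo 'OPENAI_API_KEY=sk-your-key-here' > .env
```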
Step-by-Step
- Start by creating a minimal crew that produces a draft recommendation. The key pattern here is to make the first task purely advisory so it can be reviewed before any downstream action happens.
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process

load_dotenv()

analyst = Agent(
    role="Risk Analyst",
    goal="Draft a concise recommendation for review",
    backstory="You assess operational risk for banking workflows.",
    verbose=True,
)

draft_task = Task(
    description=(
        "Review this loan application summary and draft a recommendation:\n"
        "Applicant has 2 late payments in the last 12 months, stable income, "
        "and a debt-to-income ratio of 34%.\n"
        "Return: APPROVE, REJECT, or ESCALATE with one paragraph justification."
    ),
    expected_output="A single recommendation with justification.",
    agent=analyst,
)
- Add an explicit human gate between the draft and any action. In production, this is where you would send the output to Slack, a web UI, or an internal review tool; here we use `input()` so the workflow is executable as-is.
def human_review(draft: str) -> str:
    print("\n=== AGENT DRAFT ===")
    print(draft)
    print("\nType APPROVE to continue or REJECT to stop.")
    decision = input("> ").strip().upper()
    if decision != "APPROVE":
        raise RuntimeError("Human rejected the draft.")
    return draft
if __name__ == "__main__":
    result = Crew(
        agents=[analyst],
        tasks=[draft_task],
        process=Process.sequential,
        verbose=True,
    ).kickoff()
    approved_text = human_review(str(result))
    print("\nApproved output:")
    print(approved_text)
- If you want true human-in-the-loop behavior inside a multi-step workflow, make later tasks depend on the reviewed output. This keeps the agent from moving forward until your gate returns approved content.
writer = Agent(
    role="Policy Writer",
    goal="Turn approved recommendations into a customer-safe response",
    backstory="You write regulated communication for financial services.",
    verbose=True,
)

response_task = Task(
    description=(
        "Using the approved recommendation below, write a customer-facing "
        "decision notice in plain English.\n\nAPPROVED RECOMMENDATION:\n{approved_text}"
    ),
    expected_output="A short customer-facing decision notice.",
    agent=writer,
)

crew = Crew(
    agents=[writer],
    tasks=[response_task],  # only the post-approval task; the draft is gated separately
    process=Process.sequential,
    verbose=True,
)
- Pass only reviewed content into downstream work. In advanced setups, you would persist the approval record and reviewer identity; here we keep it simple but still enforce the control point.
if __name__ == "__main__":
    draft_result = Crew(
        agents=[analyst],
        tasks=[draft_task],
        process=Process.sequential,
        verbose=True,
    ).kickoff()
    approved_text = human_review(str(draft_result))
    final_result = crew.kickoff(inputs={"approved_text": approved_text})
    print("\n=== FINAL RESPONSE ===")
    print(final_result)
- For stricter control, wrap approval logic in a reusable function and validate both the content and the reviewer's decision. This is the pattern you want when multiple crews share the same compliance gate.
def require_human_approval(text: str) -> str:
    if not text or len(text.strip()) < 20:
        raise ValueError("Draft is too short for review.")
    print("\nReview required for:")
    print(text)
    reviewer = input("Reviewer name: ").strip()
    decision = input("Approve? (yes/no): ").strip().lower()
    if decision != "yes":
        raise RuntimeError(f"Rejected by {reviewer}")
    return text

approved_text = require_human_approval(str(draft_result))
print(f"\nApproved by human reviewer:\n{approved_text}")
Testing It
Run the script and let CrewAI generate the initial recommendation. When the prompt appears, type something other than APPROVE to confirm that execution stops immediately.
Then run it again and approve it so the second task can consume approved_text and produce the final customer-facing response. If you want to test failure handling, feed an empty string or reject at the gate and verify that your exception path prevents downstream execution.
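To automate that failure-path check without sitting at a terminal, you can exercise the gate by patching `input()` with `unittest.mock`. This sketch repeats a trimmed version of the tutorial's `human_review` so it runs standalone:

```python
from unittest.mock import patch

# Trimmed copy of the tutorial's gate, repeated so this snippet is self-contained.
def human_review(draft: str) -> str:
    decision = input("> ").strip().upper()
    if decision != "APPROVE":
        raise RuntimeError("Human rejected the draft.")
    return draft

# Approval path: a patched input() stands in for the reviewer.
with patch("builtins.input", return_value="APPROVE"):
    assert human_review("ESCALATE: manual review") == "ESCALATE: manual review"

# Rejection path: anything other than APPROVE must raise,
# which is what prevents downstream tasks from running.
with patch("builtins.input", return_value="reject"):
    try:
        human_review("ESCALATE: manual review")
        raise AssertionError("gate failed to stop execution")
    except RuntimeError:
        print("rejection path verified")
```

The same pattern works for `require_human_approval`; pass `side_effect=["Alice", "no"]` to `patch` so the two `input()` calls get the reviewer name and the decision in order.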
In production terms, this gives you three things: deterministic stop points, auditable approval boundaries, and clean separation between agent reasoning and human authorization.
Next Steps
- Replace `input()` with a real review channel like Slack interactive buttons or an internal admin UI.
- Store approvals in Postgres with reviewer ID, timestamp, task ID, and prompt hash.
- Add policy checks before approval so humans only review cases that exceed your risk threshold.
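The threshold idea in the last bullet can be sketched as a pre-filter that only routes borderline drafts to a human. Everything here is an illustrative assumption, not a CrewAI feature: `risk_score`, its keyword heuristic, and the 0.7 cutoff would be replaced by your real policy model.

```python
# Hypothetical pre-filter: auto-approve low-risk drafts, route the rest
# to the human gate. The scoring heuristic and threshold are illustrative.
def risk_score(draft: str) -> float:
    # Toy heuristic: treat rejections and escalations as high risk.
    keywords = ("REJECT", "ESCALATE")
    return 0.9 if any(k in draft.upper() for k in keywords) else 0.2

def route_for_review(draft: str, threshold: float = 0.7) -> str:
    if risk_score(draft) < threshold:
        return "auto-approved"       # below threshold: no human needed
    return "needs human review"      # at/above threshold: send to the gate

print(route_for_review("APPROVE with standard terms"))     # low risk
print(route_for_review("ESCALATE to senior underwriter"))  # high risk
```

In the tutorial's flow, you would call `route_for_review` on the draft crew's output and only invoke `human_review` when it returns "needs human review".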
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit