LlamaIndex Tutorial (Python): adding human-in-the-loop for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add a human approval step into a LlamaIndex Python workflow before the agent takes an action. You need this when the model is about to do something risky, like sending an email, creating a ticket, or pulling sensitive data.

What You'll Need

  • Python 3.10+
  • llama-index
  • llama-index-llms-openai
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with QueryEngine, Tool, and LlamaIndex agent workflows
  • A terminal and a text editor

Install the packages first:

pip install llama-index llama-index-llms-openai

Step-by-Step

  1. Start with a small tool the agent can call. In real systems, this would be something that creates a case, sends a message, or updates a record.
from llama_index.core.tools import FunctionTool

def create_support_ticket(issue: str) -> str:
    # Stand-in for a real side effect (ticketing API, email, DB write).
    return f"Ticket created for: {issue}"

ticket_tool = FunctionTool.from_defaults(
    fn=create_support_ticket,
    name="create_support_ticket",
    description="Create a support ticket for a user issue.",
)
  2. Set up the LLM and the agent. The important part here is that the agent can reason about whether it should use the tool, but we will still gate execution with human approval.
import os
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent

llm = OpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])

agent = ReActAgent.from_tools(
    tools=[ticket_tool],
    llm=llm,
    verbose=True,
)
  3. Add a human-in-the-loop approval function. This is the control point: before any tool action runs, you inspect the proposed action and decide yes or no.
def require_human_approval(action_name: str, action_input: str) -> bool:
    print("\nHuman approval required")
    print(f"Action: {action_name}")
    print(f"Input: {action_input}")
    decision = input("Approve? (y/n): ").strip().lower()
    return decision == "y"
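
One caveat before wiring this in: input() raises EOFError when no interactive stdin is attached, such as in CI or a background worker. If that can happen where this code runs, prefer a variant that fails closed. This is a minimal sketch; require_human_approval_safe is an illustrative name, not a LlamaIndex API.

def require_human_approval_safe(action_name: str, action_input: str) -> bool:
    print("\nHuman approval required")
    print(f"Action: {action_name}")
    print(f"Input: {action_input}")
    try:
        decision = input("Approve? (y/n): ").strip().lower()
    except EOFError:
        # No reviewer is attached, so fail closed rather than open.
        return False
    return decision == "y"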
  4. Wire the approval check into your execution flow. Ask the agent for a response first, then only perform the real action if the request passes review. Note that agent.chat will still execute any tool the agent chooses; that is harmless here because the demo tool only returns a string, but in production you would gate the real tool itself, as in the next step.
user_query = "Please create a ticket for my laptop not booting."

# The agent may call the demo tool while reasoning; that is side-effect-free here.
response = agent.chat(user_query)
print("\nAgent proposal:")
print(response)

# The real action only runs after a human signs off.
if require_human_approval("create_support_ticket", user_query):
    result = create_support_ticket(user_query)
    print("\nApproved result:")
    print(result)
else:
    print("\nRequest rejected by human reviewer.")
  5. If you want tighter control, separate planning from execution. This pattern is better for production because you can inspect intent before any side effect happens.
from typing import Optional

def plan_then_execute(issue: str) -> Optional[str]:
    draft = f"Proposed ticket creation for issue: {issue}"
    print(draft)

    if not require_human_approval("create_support_ticket", issue):
        return None

    return create_support_ticket(issue)

result = plan_then_execute("VPN disconnects every 10 minutes")
print(result)
  6. For more realistic workflows, wrap every sensitive tool call in an approval check. That way safe read-only queries run automatically, while writes are always forced through review.
SAFE_TOOLS = {"search_docs"}

def should_require_approval(tool_name: str) -> bool:
    return tool_name not in SAFE_TOOLS

tool_name = "create_support_ticket"
tool_input = "User cannot access payroll portal"

if should_require_approval(tool_name):
    approved = require_human_approval(tool_name, tool_input)
else:
    approved = True

if approved:
    print(create_support_ticket(tool_input))
else:
    print("Skipped by reviewer.")

Testing It

Run the script and send it a request that would normally trigger tool use. You should see the proposed action printed before anything is executed.

Approve once with y and confirm that the tool output appears. Then reject with n and verify that no side effect happens.

If you are using this pattern inside an app, test both paths repeatedly. The failure mode you want to avoid is accidental execution before review.
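
You can also exercise both paths automatically by stubbing input(). Here is a minimal pytest sketch, assuming the functions above live in a module named hitl (a name chosen for this example):

import builtins

import hitl

def test_approved_path(monkeypatch):
    # Simulate a reviewer typing "y" at the prompt.
    monkeypatch.setattr(builtins, "input", lambda _prompt: "y")
    assert hitl.plan_then_execute("test issue") == "Ticket created for: test issue"

def test_rejected_path(monkeypatch):
    # Simulate a reviewer typing "n"; no side effect should occur.
    monkeypatch.setattr(builtins, "input", lambda _prompt: "n")
    assert hitl.plan_then_execute("test issue") is None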

Next Steps

  • Add structured approval records to your database so every human decision is auditable (see the sketch after this list).
  • Move from manual input() to a queue-based review UI for real applications.
  • Learn LlamaIndex workflows so you can pause and resume multi-step agent execution cleanly.
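
For the first item above, a simple starting point is to write each decision as a structured record before acting on it. The schema and file path below are assumptions for illustration, not a standard:

import json
import time
from dataclasses import asdict, dataclass

@dataclass
class ApprovalRecord:
    tool_name: str
    tool_input: str
    approved: bool
    reviewer: str
    timestamp: float

def log_approval(record: ApprovalRecord, path: str = "approvals.jsonl") -> None:
    # One JSON object per line keeps the log easy to ship to a database later.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_approval(ApprovalRecord(
    tool_name="create_support_ticket",
    tool_input="User cannot access payroll portal",
    approved=True,
    reviewer="reviewer@example.com",
    timestamp=time.time(),
))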

By Cyprian Aarons, AI Consultant at Topiax.