LangChain Tutorial (Python): adding human-in-the-loop for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to pause an LLM workflow, inspect the model’s proposed action, and let a human approve or edit it before execution. You need this when the model is about to do something risky: send an email, write to a database, trigger a refund, or call an external API.

What You'll Need

  • Python 3.10+
  • langchain
  • langchain-openai
  • pydantic
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with LangChain chat models and tools
  • A terminal and a virtual environment

Install the packages:

pip install langchain langchain-openai pydantic

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with a simple tool that represents the risky action. In production this might be a CRM update or payment action; here we’ll use a harmless function so you can see the pattern clearly.
from langchain_core.tools import tool

@tool
def create_support_ticket(summary: str, priority: str) -> str:
    """Create a support ticket."""
    return f"Ticket created with priority={priority}: {summary}"
  2. Build a structured output schema for the model’s proposed action. This gives you a clean object to review before anything runs.
from typing import Literal
from pydantic import BaseModel, Field

class TicketDraft(BaseModel):
    summary: str = Field(..., description="Short description of the issue")
    priority: Literal["low", "medium", "high"] = Field(..., description="Ticket priority")
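
The Literal type is doing real work here: pydantic rejects any priority outside the allowed set, so bad values fail fast instead of reaching your tool. A quick check with made-up values:

from pydantic import ValidationError

TicketDraft(summary="Login failures", priority="high")  # valid

try:
    TicketDraft(summary="Login failures", priority="urgent")
except ValidationError as e:
    print(e)  # rejected: priority must be 'low', 'medium', or 'high'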
  3. Ask the model to produce a draft instead of executing the tool directly. The model returns structured data that your application can display to a human reviewer.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
structured_llm = llm.with_structured_output(TicketDraft)

draft = structured_llm.invoke(
    "Customer reports repeated login failures after password reset."
)

print(draft)
print(draft.model_dump())
  4. Add the human-in-the-loop checkpoint. In a real app this could be an approval screen, Slack message, or admin dashboard; here we use terminal input so the flow is executable end-to-end.
def request_approval(draft: TicketDraft) -> TicketDraft | None:
    print("\nProposed action:")
    print(f"Summary : {draft.summary}")
    print(f"Priority: {draft.priority}")

    answer = input("\nApprove? (y/n/edit): ").strip().lower()

    if answer == "y":
        return draft

    if answer == "edit":
        summary = input("New summary: ").strip()
        priority = input("New priority (low/medium/high): ").strip()
        return TicketDraft(summary=summary, priority=priority)

    return None
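
Note that the edit path trusts the reviewer’s typing: an invalid priority raises a pydantic ValidationError and crashes the script. One way to harden it, sketched under the assumption that you want to re-prompt rather than fail, is to loop until the edits validate:

from pydantic import ValidationError

def request_edit() -> TicketDraft:
    """Re-prompt until the reviewer's edits pass schema validation."""
    while True:
        summary = input("New summary: ").strip()
        priority = input("New priority (low/medium/high): ").strip()
        try:
            return TicketDraft(summary=summary, priority=priority)
        except ValidationError as exc:
            print(f"Invalid edit, try again:\n{exc}")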
  5. Execute only after approval. This is the part that matters in production: the LLM can suggest, but your application decides whether anything actually happens.
approved_draft = request_approval(draft)

if approved_draft is None:
    print("Action rejected by human reviewer.")
else:
    result = create_support_ticket.invoke(approved_draft.model_dump())
    print("\nFinal result:")
    print(result)
  6. Wrap it into one script so you can run the full workflow repeatedly. This version keeps the control flow explicit, which is what you want for auditability and debugging.
from typing import Literal
from pydantic import BaseModel, Field
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def create_support_ticket(summary: str, priority: str) -> str:
    """Create a support ticket."""
    return f"Ticket created with priority={priority}: {summary}"

class TicketDraft(BaseModel):
    summary: str = Field(..., description="Short description of the issue")
    priority: Literal["low", "medium", "high"] = Field(...)

def request_approval(draft: TicketDraft) -> TicketDraft | None:
    print("\nProposed action:")
    print(draft.model_dump())
    answer = input("Approve? (y/n/edit): ").strip().lower()
    if answer == "y":
        return draft
    if answer == "edit":
        summary = input("New summary: ").strip()
        priority = input("New priority (low/medium/high): ").strip()
        return TicketDraft(summary=summary, priority=priority)
    return None

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
structured_llm = llm.with_structured_output(TicketDraft)

draft = structured_llm.invoke(
    "Customer reports repeated login failures after password reset."
)

approved_draft = request_approval(draft)
if approved_draft:
    print(create_support_ticket.invoke(approved_draft.model_dump()))
else:
    print("Rejected.")

Testing It

Run the script and confirm the model produces a draft before any tool call happens. Then try all three paths: approve, edit, and reject.
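
Assuming you saved the combined script from step 6 as hitl_ticket.py (the name is arbitrary):

python hitl_ticket.py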

If you approve or edit, you should see the final tool output only after your input. If you reject, nothing should execute beyond printing the rejection message.

For extra confidence, add logging around each branch so you can trace exactly when human review happened. In regulated environments, that audit trail matters more than fancy agent behavior.
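
A minimal sketch of that logging with the standard library, replacing the final approval block of the script (the logger name and messages are illustrative):

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("hitl")

approved_draft = request_approval(draft)
if approved_draft is None:
    log.info("REJECTED draft=%s", draft.model_dump())
else:
    log.info("APPROVED draft=%s", approved_draft.model_dump())
    result = create_support_ticket.invoke(approved_draft.model_dump())
    log.info("EXECUTED result=%s", result)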

Next Steps

  • Move the approval step into Streamlit, FastAPI, or Slack so reviewers don’t need a terminal.
  • Add persistence for drafts and approvals so every decision is stored with timestamps and user IDs (see the sketch after this list).
  • Combine this pattern with LangGraph when you need multi-step workflows with explicit approval nodes.
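
For the persistence idea, an append-only JSONL file is the simplest starting point; a production system would likely use a database, and the reviewer field below is a placeholder you would fill from your auth layer:

import json
from datetime import datetime, timezone

def record_decision(draft: TicketDraft, decision: str, reviewer: str) -> None:
    """Append one audit record per review decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "decision": decision,  # "approved", "edited", or "rejected"
        "draft": draft.model_dump(),
    }
    with open("approvals.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")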

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

