LangChain Tutorial (Python): Adding Human-in-the-Loop for Advanced Developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add a human approval gate into a LangChain Python workflow so risky actions can be reviewed before execution. You need this when your agent can send emails, approve transactions, or call internal tools where an automatic mistake is expensive.

What You'll Need

  • Python 3.10+
  • langchain
  • langchain-openai
  • pydantic
  • OpenAI API key set as OPENAI_API_KEY
  • A terminal and a Python virtual environment
  • Basic familiarity with LangChain chat models and tools

Install the packages:

pip install langchain langchain-openai pydantic
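
If you want a quick sanity check before running anything, the short snippet below verifies that the key is visible to Python. It assumes you exported OPENAI_API_KEY in the same shell; adjust it if you load secrets another way.

import os

# Fail fast if the API key is missing so later ChatOpenAI calls don't error mid-run.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set OPENAI_API_KEY before running the examples below.")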

Step-by-Step

  1. Start with a simple agent that can propose an action, but do not let it execute anything yet. The key pattern is to separate “decision” from “execution” so you can insert review in between.
from typing import Literal
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class ActionPlan(BaseModel):
    action: Literal["refund", "escalate", "deny"] = Field(..., description="The single next step to take")
    reason: str = Field(..., description="Short justification for the chosen action")

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
planner = llm.with_structured_output(ActionPlan)

plan = planner.invoke(
    "Customer claims a duplicate charge of $120 on their card. "
    "Choose the safest next step."
)

print(plan)
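
Structured output can still fail if the model returns something that does not match the ActionPlan schema. The guard below is a minimal sketch, assuming you would rather fall back to escalation than crash; safe_plan is a hypothetical helper, not part of LangChain.

def safe_plan(message: str) -> ActionPlan:
    # Hypothetical wrapper: if the call or parsing fails, default to the
    # lowest-risk action instead of raising inside your request handler.
    try:
        return planner.invoke(message)
    except Exception as exc:
        return ActionPlan(action="escalate", reason=f"Planner failed: {exc}")
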
  2. Add a human-in-the-loop checkpoint that pauses for approval before the action runs. In production, this prompt would usually be replaced by a UI, ticketing system, or workflow engine, but input() is enough to prove the control flow.
def require_human_approval(plan: ActionPlan) -> bool:
    print("\nProposed action:")
    print(f"Action: {plan.action}")
    print(f"Reason: {plan.reason}")
    response = input("Approve? (yes/no): ").strip().lower()
    return response in {"yes", "y"}

approved = require_human_approval(plan)

if approved:
    print("Human approved the plan.")
else:
    print("Human rejected the plan.")
  3. Wrap the approval gate around the actual side effect. This is the part most people get wrong: the LLM should never directly perform the business action; your application should do it after approval.
def execute_action(plan: ActionPlan) -> str:
    if plan.action == "refund":
        return "Refund executed in payment system."
    if plan.action == "escalate":
        return "Case escalated to support queue."
    return "Request denied and logged."

if approved:
    result = execute_action(plan)
else:
    result = "No action taken."

print(result)
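
If the side effect already lives behind a LangChain tool, the same separation applies: the model proposes arguments, and your code invokes the tool only after approval. A rough sketch, where issue_refund is a hypothetical tool written for this example:

from langchain_core.tools import tool

@tool
def issue_refund(order_id: str, amount: float) -> str:
    """Issue a refund in the payment system (hypothetical stand-in)."""
    return f"Refunded ${amount:.2f} for order {order_id}."

# The model only proposes arguments; the application decides whether to invoke the tool.
proposed_args = {"order_id": "88421", "amount": 120.0}
if require_human_approval(plan):
    print(issue_refund.invoke(proposed_args))
else:
    print("Refund blocked by reviewer.")
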
  4. Put the whole flow behind one function so you can reuse it across agents and tools. This makes it easier to add logging, audit trails, and policy checks later without rewriting every chain.
def process_customer_request(message: str) -> str:
    plan = planner.invoke(message)
    print("\nLLM proposal:", plan.model_dump())

    if not require_human_approval(plan):
        return "Rejected by human reviewer."

    return execute_action(plan)

output = process_customer_request(
    "A customer says they were billed twice for order #88421 and wants it fixed."
)
print("\nFinal output:", output)
  5. If you want a stricter control point, decide which actions always require human review and let low-risk actions pass automatically. That gives you policy-based routing instead of forcing humans into every request.
HIGH_RISK_ACTIONS = {"refund"}

def needs_review(plan: ActionPlan) -> bool:
    return plan.action in HIGH_RISK_ACTIONS

plan = planner.invoke("Customer requests a refund for duplicate billing.")

if needs_review(plan):
    approved = require_human_approval(plan)
else:
    approved = True

print("Approved:", approved)

Testing It

Run the script with a few different prompts and confirm that the model returns a structured ActionPlan every time. Then answer no at the approval prompt and verify that no execution path runs.

Next, answer yes and confirm that only the approved branch reaches execute_action(). For a stronger audit trail, add logging around both the proposed plan and the final decision so you can trace who approved what and when.
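
One way to add that trail with only the standard library's logging module is sketched below; the logger name and fields are illustrative, not a required format.

import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("hitl.audit")

def log_decision(plan: ActionPlan, approved: bool, reviewer: str = "cli-user") -> None:
    # Record both the proposal and the human decision so the trail can be reconstructed.
    audit_log.info(
        "action=%s reason=%r reviewer=%s approved=%s",
        plan.action, plan.reason, reviewer, approved,
    )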

Next Steps

  • Replace input() with a real reviewer workflow using FastAPI, Slack, or an internal admin panel.
  • Add LangGraph state management so approvals become part of a durable agent workflow.
  • Store every proposed action and reviewer decision in an audit table for compliance reporting, as sketched below.
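
For the audit-table idea in the last bullet, here is a minimal sketch using the standard library's sqlite3; the table name and columns are illustrative, not a prescribed schema.

import sqlite3

conn = sqlite3.connect("hitl_audit.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS approvals (
        created_at TEXT DEFAULT CURRENT_TIMESTAMP,
        action TEXT,
        reason TEXT,
        reviewer TEXT,
        approved INTEGER
    )"""
)

def record_approval(plan: ActionPlan, reviewer: str, approved: bool) -> None:
    # Persist every proposal and decision so compliance can reconstruct who approved what.
    conn.execute(
        "INSERT INTO approvals (action, reason, reviewer, approved) VALUES (?, ?, ?, ?)",
        (plan.action, plan.reason, reviewer, int(approved)),
    )
    conn.commit()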
