LangGraph Tutorial (Python): adding human-in-the-loop for intermediate developers

By Cyprian Aarons. Updated 2026-04-22

This tutorial shows how to pause a LangGraph workflow, ask a human to review or edit the intermediate state, and then resume execution with that input. You need this when an agent reaches a risky decision point: approving a refund, sending an email, or selecting a tool action that should not run without review.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-core
  • An OpenAI-compatible chat model package if you want to extend this into an LLM-backed flow later
  • No API key is required for the example below because it uses plain Python state
  • Basic familiarity with:
    • StateGraph
    • nodes and edges
    • Command and interrupt

Install the core package:

pip install langgraph

Step-by-Step

  1. Start by defining a small state object and a graph with two nodes: one that prepares a decision, and one that applies it after human review. The key idea is that the graph pauses before the risky step and returns control to your application.
from typing import TypedDict, Optional

from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt, Command


class ReviewState(TypedDict):
    amount: int
    reason: str
    approved: Optional[bool]
    reviewer_note: Optional[str]


def prepare_request(state: ReviewState) -> ReviewState:
    return {
        **state,
        "approved": None,
        "reviewer_note": None,
    }
  2. Add an interrupt inside the review node. When this node runs, LangGraph stops execution and hands you the payload you pass to interrupt(), which your app can show in a UI, CLI prompt, or approval service.
def human_review(state: ReviewState) -> ReviewState:
    decision = interrupt({
        "amount": state["amount"],
        "reason": state["reason"],
        "message": "Approve this request?",
    })
    return {
        **state,
        "approved": decision["approved"],
        "reviewer_note": decision.get("note"),
    }


def apply_decision(state: ReviewState) -> ReviewState:
    if state["approved"]:
        print(f"Approved request for ${state['amount']}: {state['reason']}")
    else:
        print(f"Rejected request for ${state['amount']}: {state['reason']}")
    return state
  3. Build the graph and compile it with a checkpointer. Human-in-the-loop workflows need persistence because the process must resume from exactly where it stopped.
from langgraph.checkpoint.memory import MemorySaver

builder = StateGraph(ReviewState)
builder.add_node("prepare_request", prepare_request)
builder.add_node("human_review", human_review)
builder.add_node("apply_decision", apply_decision)

builder.add_edge(START, "prepare_request")
builder.add_edge("prepare_request", "human_review")
builder.add_edge("human_review", "apply_decision")
builder.add_edge("apply_decision", END)

checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
  4. Run the graph until it interrupts, then resume it with the human response. In production, that response usually comes from your admin UI or approval queue; here we simulate it directly in Python.
config = {"configurable": {"thread_id": "request-123"}}

initial_state = {
    "amount": 5000,
    "reason": "Manual payout adjustment",
    "approved": None,
    "reviewer_note": None,
}

result = graph.invoke(initial_state, config=config)
print(result)
  5. Resume execution by sending a Command(resume=...) back into the same thread. The resume payload must match what your interrupt expects, so keep that contract explicit and stable. Note that on resume, LangGraph re-runs the interrupted node from its beginning and interrupt() returns your payload, so keep any code before the interrupt() call idempotent.
resume_payload = {
    "approved": True,
    "note": "Validated against support ticket #8841",
}

final_result = graph.invoke(
    Command(resume=resume_payload),
    config=config,
)

print(final_result)
  6. If you want to inspect whether the graph paused correctly before resuming, read the interrupt data from the returned result structure in your app layer. This is where you would render an approval screen or store an audit record before calling resume.
if "__interrupt__" in result:
    interrupted_payload = result["__interrupt__"][0].value
    print("Paused for review:")
    print(interrupted_payload)
else:
    print("No interrupt returned")
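At the CLI, "render an approval screen" can be as small as formatting the interrupt payload into a prompt for the reviewer. A minimal sketch; render_review_prompt is a hypothetical helper name, not part of LangGraph, and it assumes the amount/reason/message keys passed to interrupt() above:

```python
def render_review_prompt(payload: dict) -> str:
    """Format the interrupt payload as a console prompt.

    Hypothetical app-layer helper; the keys mirror what human_review
    passes to interrupt() in this tutorial.
    """
    lines = [
        payload.get("message", "Review required"),
        f"  amount: ${payload['amount']}",
        f"  reason: {payload['reason']}",
        "Respond by resuming with {'approved': bool, 'note': str}",
    ]
    return "\n".join(lines)


prompt = render_review_prompt({
    "amount": 5000,
    "reason": "Manual payout adjustment",
    "message": "Approve this request?",
})
print(prompt)
```

Whatever the reviewer answers then becomes the dict you hand to Command(resume=...).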

Testing It

Run the script once and confirm that execution stops at human_review instead of reaching apply_decision. You should see interrupt data containing the amount, reason, and approval prompt.

Then resume using the same thread_id. If checkpointing is wired correctly, LangGraph continues from the paused node and prints either the approved or rejected message.

Test both branches:

  • {"approved": True, "note": "..."}
  • {"approved": False, "note": "..."}

If both paths work and you get deterministic resumption, your human-in-the-loop setup is correct.
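Because apply_decision is plain Python, the two branches can also be exercised at the unit level before driving the compiled graph. A sketch that copies apply_decision from the tutorial and captures its printed output; the run_branch harness is an assumption, not LangGraph API:

```python
import io
from contextlib import redirect_stdout


# Copied verbatim from the tutorial; only the branch logic is under test.
def apply_decision(state):
    if state["approved"]:
        print(f"Approved request for ${state['amount']}: {state['reason']}")
    else:
        print(f"Rejected request for ${state['amount']}: {state['reason']}")
    return state


def run_branch(approved: bool) -> str:
    """Run one branch and return what apply_decision printed."""
    state = {
        "amount": 5000,
        "reason": "Manual payout adjustment",
        "approved": approved,
    }
    buf = io.StringIO()
    with redirect_stdout(buf):
        apply_decision(state)
    return buf.getvalue().strip()


print(run_branch(True))   # Approved request for $5000: Manual payout adjustment
print(run_branch(False))  # Rejected request for $5000: Manual payout adjustment
```

The end-to-end check (interrupt, then resume on the same thread_id) still belongs in an integration test against the compiled graph.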

Next Steps

  • Replace the hardcoded review payload with a FastAPI endpoint or internal admin UI.
  • Add structured validation for reviewer input using Pydantic before resuming.
  • Move from MemorySaver to a durable checkpointer backed by Postgres or Redis for production use.
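Before reaching for Pydantic, the reviewer-input contract can be sketched with plain Python. validate_reviewer_input is a hypothetical name, and the 500-character note limit is an arbitrary example; a Pydantic model would replace this wholesale:

```python
def validate_reviewer_input(payload: dict) -> dict:
    """Reject malformed reviewer input before resuming the graph.

    Stdlib stand-in for the Pydantic model suggested above; the field
    names mirror the resume payload used in this tutorial.
    """
    if not isinstance(payload.get("approved"), bool):
        raise ValueError("'approved' must be a bool")
    note = payload.get("note")
    if note is not None and not isinstance(note, str):
        raise ValueError("'note' must be a string or None")
    if note is not None and len(note) > 500:  # example limit, not a LangGraph rule
        raise ValueError("'note' must be 500 characters or fewer")
    return {"approved": payload["approved"], "note": note}
```

Call this on whatever your UI or API hands back, and only pass the validated dict into Command(resume=...).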

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
