LangGraph Tutorial (Python): Adding Human-in-the-Loop for Advanced Developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to pause a LangGraph workflow, route a decision to a human, and resume execution with that approval or edit. You need this when an agent is allowed to draft actions, but a person must review high-risk steps like sending emails, approving claims, or changing customer records.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-core
  • langchain-openai
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with LangGraph state graphs and conditional edges
  • A terminal for running the script and testing interrupts

Step-by-Step

  1. Start with a state model that can carry the user request, the model draft, and the human decision. For human-in-the-loop workflows, keep the state explicit so you can serialize it, inspect it, and resume it later.
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    draft: str
    approved: bool
    final_answer: str
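The add_messages annotation tells LangGraph to merge message updates into the existing list instead of overwriting it: new messages are appended, and a message with a matching id replaces the old one. A rough stdlib-only sketch of that merge idea, purely for intuition (the real reducer lives in langgraph.graph.message and also handles id generation and message coercion):

```python
# Simplified illustration of an add_messages-style reducer.
# Not the real implementation: it only shows append-or-replace-by-id.

def merge_messages(existing: list[dict], update: list[dict]) -> list[dict]:
    merged = list(existing)
    index_by_id = {m["id"]: i for i, m in enumerate(merged)}
    for msg in update:
        if msg["id"] in index_by_id:
            merged[index_by_id[msg["id"]]] = msg  # replace edited message
        else:
            merged.append(msg)                    # append new message
    return merged

history = [{"id": "m1", "role": "user", "content": "Hi"}]
updated = merge_messages(history, [{"id": "m2", "role": "ai", "content": "Hello"}])
edited = merge_messages(updated, [{"id": "m2", "role": "ai", "content": "Hello!"}])
```

This is why nodes can return only the messages they produced: the reducer takes care of combining them with the history.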
  2. Build a node that creates a draft response. In real systems this is usually where the LLM proposes an action or answer before any side effect happens. Because the messages field uses the add_messages reducer, the node returns only the new message and the reducer appends it to the history.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

def draft_node(state: AgentState):
    # Propose a draft; no side effects happen until a human approves it.
    response = llm.invoke(state["messages"])
    return {
        "draft": response.content,
        "messages": [response],  # add_messages appends this to the history
    }
  3. Add a human review node that pauses execution using interrupt(). The key detail is that the graph stops here and waits for an external resume payload, which makes it suitable for approval flows in production.
from langgraph.types import interrupt

def human_review_node(state: AgentState):
    decision = interrupt(
        {
            "draft": state["draft"],
            "prompt": "Approve this draft? Return {'approved': True/False, 'edited_text': '...'}"
        }
    )
    approved = decision["approved"]
    edited_text = decision.get("edited_text", state["draft"])
    return {
        "approved": approved,
        "final_answer": edited_text if approved else "Rejected by human reviewer.",
    }
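The resume payload drives three outcomes: approve as-is, approve with edits, or reject. Because that branching is easy to get wrong, it helps to see it isolated as a pure function. This is a standalone mirror of the node's logic with a hypothetical name (resolve_review), not part of the LangGraph API:

```python
def resolve_review(draft: str, decision: dict) -> dict:
    """Mirror of the review branching: approve, approve-with-edit, or reject."""
    approved = decision["approved"]
    edited_text = decision.get("edited_text", draft)
    return {
        "approved": approved,
        "final_answer": edited_text if approved else "Rejected by human reviewer.",
    }

as_is = resolve_review("Draft A", {"approved": True})
with_edit = resolve_review("Draft A", {"approved": True, "edited_text": "Draft B"})
rejected = resolve_review("Draft A", {"approved": False})
```

Keeping this logic pure also makes it trivial to unit-test without running the graph or calling a model.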
  4. Wire the graph so execution goes from drafting to review to completion. This keeps the control flow simple: generate first, then gate on human input, then finalize. Accepting a checkpointer as a parameter lets the same builder serve quick tests and production.
def build_graph(checkpointer=None):
    graph = StateGraph(AgentState)
    graph.add_node("draft", draft_node)
    graph.add_node("review", human_review_node)

    graph.add_edge(START, "draft")
    graph.add_edge("draft", "review")
    graph.add_edge("review", END)

    return graph.compile(checkpointer=checkpointer)
  5. Run the workflow with a checkpointer so LangGraph can persist the paused state. Without persistence you cannot reliably resume after an interrupt.
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = build_graph(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "case-123"}}

initial_state = {
    "messages": [("user", "Draft a customer email about a failed payment.")],
    "draft": "",
    "approved": False,
    "final_answer": "",
}

result = app.invoke(initial_state, config=config)
print(result)
  6. Resume the paused run by passing the human decision back into the same thread, wrapped in Command(resume=...). In practice this is what your admin UI or internal review tool would do after someone clicks approve or edits text.
from langgraph.types import Command

resume_payload = {
    "approved": True,
    "edited_text": "Hi Alex, your payment did not go through. Please update your card details and try again."
}

final_result = app.invoke(Command(resume=resume_payload), config=config)
print(final_result["final_answer"])

Testing It

Run the script once and confirm it stops at the interrupt point instead of returning a final answer immediately. The returned state should include an __interrupt__ entry carrying the draft and the review prompt.

Then call invoke() again with the same thread_id, passing your approval decision as the resume payload. If checkpointing is working correctly, LangGraph resumes from the review node rather than starting over.

For a real test, change approved to False and verify you get the rejection path back in final_answer. Also try editing edited_text to make sure your reviewer can modify content before release.
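If you want to sanity-check the pause/resume contract before wiring up an API key, the thread mechanics can be simulated with a plain dict keyed by thread id. This is a toy stand-in for the checkpointer, not the LangGraph API; run, resume, and checkpoints are all made-up names:

```python
# Toy stand-in for checkpointed pause/resume: state is saved per thread id,
# a paused run surfaces the interrupt payload, and resuming picks up the
# saved state instead of starting over.

checkpoints: dict[str, dict] = {}

def run(thread_id: str, draft: str) -> dict:
    checkpoints[thread_id] = {"draft": draft, "status": "paused"}
    return {"__interrupt__": {"draft": draft, "prompt": "Approve this draft?"}}

def resume(thread_id: str, decision: dict) -> dict:
    state = checkpoints[thread_id]  # resume from the saved state
    state["status"] = "done"
    state["final_answer"] = (
        decision.get("edited_text", state["draft"])
        if decision["approved"]
        else "Rejected by human reviewer."
    )
    return state

paused = run("case-123", "Hi Alex, your payment failed.")
done = resume("case-123", {"approved": True})
```

The shape mirrors the real flow: the first call returns an interrupt payload, and the second call with the same thread id completes the run.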

Next Steps

  • Add conditional routing so low-risk requests skip human review while high-risk requests pause.
  • Replace MemorySaver with a persistent checkpointer like PostgreSQL for production deployments.
  • Build an internal reviewer UI that reads interrupt payloads and posts resume decisions back to LangGraph.
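The first bullet above can be sketched as a routing function of the kind you would hand to add_conditional_edges: inspect the draft, then return the name of the next node. The keyword rules and node names here are placeholders, not a recommended risk policy:

```python
# Placeholder risk rules; a real system would use a classifier or policy engine.
HIGH_RISK_KEYWORDS = {"refund", "delete", "payment", "legal"}

def route_after_draft(state: dict) -> str:
    """Return the next node name: pause for review only on high-risk drafts."""
    text = state["draft"].lower()
    if any(word in text for word in HIGH_RISK_KEYWORDS):
        return "review"   # high risk: pause for human approval
    return "finalize"     # low risk: skip the human entirely

risky = route_after_draft({"draft": "Please confirm the refund."})
safe = route_after_draft({"draft": "Thanks for your question!"})
```

You would register this with something like graph.add_conditional_edges("draft", route_after_draft), assuming a "finalize" node exists for the low-risk path.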

By Cyprian Aarons, AI Consultant at Topiax.