AutoGen Tutorial (TypeScript): adding human-in-the-loop for intermediate developers

By Cyprian Aarons. Updated 2026-04-21.

This tutorial shows how to pause an AutoGen TypeScript workflow for a real person to review, edit, or approve an agent response before execution continues. You need this when the model is making decisions that affect customers, money, or compliance and you want a human checkpoint in the middle of the loop.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or tsx
  • AutoGen installed:
    • npm install @autogenai/autogen
  • An OpenAI API key exported as OPENAI_API_KEY
  • Basic familiarity with AssistantAgent, UserProxyAgent, and message passing
  • A terminal you can type into while the script runs

Step-by-Step

  1. Start with a minimal AutoGen setup: one assistant agent and one user proxy agent. The human-in-the-loop pattern works best when the proxy agent is the gatekeeper for anything risky.
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const assistant = new AssistantAgent({
  name: "assistant",
  modelClientOptions: {
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  },
});

const user = new UserProxyAgent({
  name: "human_proxy",
});
  2. Add a function that pauses execution and asks a human for approval. In production, this can be wired to Slack, a web form, or an internal workflow tool; here we use stdin so you can run it locally and see the full control flow.
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

export async function askHuman(prompt: string): Promise<string> {
  const rl = readline.createInterface({ input, output });
  const answer = await rl.question(`${prompt}\n> `);
  rl.close();
  return answer.trim();
}
  3. Run the assistant first, then stop before taking action if the output needs review. This is the core pattern: let the model draft, but force a human decision before any downstream side effect like sending email, approving claims, or updating records.
async function main() {
  const task = "Draft a customer-friendly response explaining why a refund was denied.";
  const result = await assistant.run(task);

  const draft = result.messages.at(-1)?.content ?? "";
  console.log("\nAssistant draft:\n", draft);

  const decision = await askHuman("Approve this response? Type 'approve' or paste edits");
  if (decision.toLowerCase() !== "approve") {
    console.log("\nHuman edited version:\n", decision);
    return;
  }

  console.log("\nApproved response:\n", draft);
}

main().catch(console.error);
  4. If you need a true intermediate checkpoint inside a multi-agent flow, wrap the handoff in an explicit approval step. This keeps your orchestration deterministic and makes it obvious where the process can pause.
type ApprovalResult =
  | { approved: true }
  | { approved: false; editedText: string };

async function requireApproval(draft: string): Promise<ApprovalResult> {
  console.log("\nProposed action:\n", draft);
  const response = await askHuman(
    "Approve? Type 'approve' to continue, or paste revised text"
  );

  if (response.toLowerCase() === "approve") {
    return { approved: true };
  }

  return { approved: false, editedText: response };
}
  5. Put it together in a production-shaped flow where the assistant generates content, the human reviews it, and only then does your app continue. If you later replace stdout with a UI or ticketing system, this structure stays the same.
async function runWithHumanInTheLoop() {
  const prompt =
    "Write a short internal note summarizing why a loan application was escalated.";

  const result = await assistant.run(prompt);
  const draft = result.messages.at(-1)?.content ?? "";

  const approval = await requireApproval(draft);

  if (!approval.approved) {
    console.log("\nUsing human-edited text:");
    console.log(approval.editedText);
    return;
  }

  console.log("\nProceeding with approved assistant output:");
  console.log(draft);
}

runWithHumanInTheLoop().catch(console.error);

Testing It

Run the script with your API key set and confirm that the assistant produces a draft first, then waits for your input before continuing. Type approve and verify that execution proceeds without modification.

Then rerun it and paste an edited version instead of approving. You should see your edited text returned and no downstream action taken.

For a real system, test three cases:

  • Approve as-is
  • Reject and edit
  • Timeout or no response

That last case matters because human review systems fail in practice when nobody is watching the queue.
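One way to cover the timeout case is to race the human prompt against a timer and fail safe to a non-approval outcome. Here is a minimal sketch; `withTimeout` is an illustrative helper (not an AutoGen API), and the 15-minute window is an assumption you should tune to your review process.

```typescript
// Sketch: a timeout guard for the review step. If no reviewer responds
// within `ms` milliseconds, resolve with `fallback` instead of hanging.
export async function withTimeout<T>(
  pending: Promise<T>,
  ms: number,
  fallback: T
): Promise<T> {
  let timer: NodeJS.Timeout | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  // Whichever promise settles first wins; always clear the timer afterwards.
  return Promise.race([pending, timeout]).finally(() => clearTimeout(timer));
}

// Example: treat 15 minutes of silence as "timeout", never as an approval.
// const decision = await withTimeout(askHuman("Approve?"), 15 * 60_000, "timeout");
```

The key design choice is that silence maps to a distinct `"timeout"` value rather than to `"approve"`, so an unattended queue can never accidentally release an action.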

Next Steps

  • Replace stdin with a Slack button or web approval form
  • Add structured outputs so humans review JSON instead of free text
  • Persist every draft, edit, approver ID, and timestamp for audit trails
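For the audit-trail step, a sketch of what to persist per decision might look like the following. The field names, the JSONL file, and the `appendAuditLog` helper are assumptions for illustration; substitute your own storage (database, log pipeline, ticketing system).

```typescript
import { appendFile } from "node:fs/promises";

// One record per human decision: the original draft, what the human did,
// the text that actually shipped, who approved it, and when.
interface AuditRecord {
  taskId: string;
  draft: string;
  decision: "approved" | "edited" | "timeout";
  finalText: string;
  approverId: string;
  timestamp: string; // ISO 8601
}

export function buildAuditRecord(
  taskId: string,
  draft: string,
  decision: AuditRecord["decision"],
  finalText: string,
  approverId: string
): AuditRecord {
  return {
    taskId,
    draft,
    decision,
    finalText,
    approverId,
    timestamp: new Date().toISOString(),
  };
}

// Append one JSON object per line; JSONL keeps the trail greppable.
export async function appendAuditLog(
  record: AuditRecord,
  path = "audit.jsonl"
): Promise<void> {
  await appendFile(path, JSON.stringify(record) + "\n");
}
```

Writing the record before acting on the approval means the trail survives even if the downstream action fails.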


By Cyprian Aarons, AI Consultant at Topiax.
