AutoGen Tutorial (TypeScript): adding human-in-the-loop for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to insert a human approval step into an AutoGen TypeScript workflow without breaking the agent loop. You’d use this when the model is about to do something expensive, risky, or irreversible, and you want a developer or operator to approve, edit, or reject the action first.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build pipeline
  • autogen-core and autogen-agentchat installed
  • An OpenAI API key exported as OPENAI_API_KEY
  • Basic familiarity with AutoGen agents, messages, and model clients
  • A terminal where you can run interactive prompts

Install the packages:

npm install autogen-core autogen-agentchat @autogen/openai typescript ts-node

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with a normal assistant agent. The only difference in this tutorial is that we’ll wrap its output in a human gate before any “final” action happens.
import { OpenAIChatCompletionClient } from "@autogen/openai";
import { AssistantAgent } from "autogen-agentchat";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const assistant = new AssistantAgent({
  name: "assistant",
  modelClient,
  systemMessage: "You are a careful assistant that proposes actions clearly.",
});
  2. Add a small approval function. In production this might call Slack, a web UI, or an internal approval service; here we use stdin so the pattern is runnable end-to-end.
import readline from "node:readline/promises";
import process from "node:process";

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function askForApproval(summary: string): Promise<boolean> {
  console.log("\nProposed action:");
  console.log(summary);
  const answer = await rl.question("Approve? (y/n): ");
  return answer.trim().toLowerCase() === "y";
}
  3. Run the agent once, then stop before executing anything sensitive. The key idea is to treat the model’s response as a proposal, not as an instruction you immediately trust.
async function main() {
  const task =
    "Draft an email asking a customer to confirm a bank transfer of $25,000.";

  const result = await assistant.run(task);
  const proposal = result.messages.at(-1)?.content ?? "";

  const approved = await askForApproval(proposal);

  if (!approved) {
    console.log("Rejected by human reviewer.");
    rl.close();
    return;
  }

  console.log("Approved. Proceeding with downstream action...");
  rl.close(); // close the prompt so the process can exit cleanly
}

main().catch((err) => {
  console.error(err);
  rl.close();
  process.exit(1);
});
  4. If you need the human to edit the content instead of just approving it, capture their revision and feed that into the next stage. This is the pattern you want for compliance-heavy flows where operators must correct language before sending.
async function askForEdit(original: string): Promise<string> {
  console.log("\nOriginal proposal:");
  console.log(original);
  const edited = await rl.question("\nPaste approved version:\n");
  return edited.trim();
}

async function mainWithEdit() {
  const task =
    "Write a short internal note explaining why a payment was delayed.";

  const result = await assistant.run(task);
  const proposal = result.messages.at(-1)?.content ?? "";
  const approvedText = await askForEdit(proposal);

  console.log("\nFinal human-approved text:");
  console.log(approvedText);
  rl.close(); // close the prompt so the process can exit cleanly
}
  5. For advanced workflows, gate only specific tool calls instead of every response. This is how you keep automation high while still protecting dangerous operations like sending emails, creating tickets, or initiating payments.
type ToolCall = {
  name: string;
  arguments: Record<string, unknown>;
};

async function approveToolCall(call: ToolCall): Promise<boolean> {
  const summary = `${call.name}(${JSON.stringify(call.arguments)})`;
  return askForApproval(summary);
}

async function executeSensitiveAction(call: ToolCall) {
  if (!(await approveToolCall(call))) {
    console.log("Tool call blocked.");
    return;
  }

  console.log(`Executing ${call.name} with`, call.arguments);
}

executeSensitiveAction({
  name: "send_email",
  arguments: { to: "customer@example.com", subject: "Transfer confirmation" },
}).catch(console.error); // the call is async; surface any failure instead of dropping it

Testing It

Run the script and confirm the agent produces output before any side effect happens. Then test both branches: approve once with y, and reject once with n, making sure the program stops cleanly in both cases.
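To exercise both branches without typing into stdin every time, make the approver injectable. A minimal sketch of that pattern, where the `Approver` type and `gate` helper are illustrative names (not part of AutoGen), so tests can substitute canned answers for the interactive prompt:

```typescript
// An approver decides whether a proposed action may proceed.
type Approver = (summary: string) => Promise<boolean>;

// Wrap any side effect behind an injected approver so tests can
// swap in a deterministic approver for the interactive stdin prompt.
async function gate(
  summary: string,
  approve: Approver,
  action: () => Promise<void>,
): Promise<"executed" | "blocked"> {
  if (await approve(summary)) {
    await action();
    return "executed";
  }
  return "blocked";
}

// Canned approvers for automated tests.
const alwaysYes: Approver = async () => true;
const alwaysNo: Approver = async () => false;
```

In production you pass the real stdin-backed `askForApproval`; in tests you pass `alwaysYes` or `alwaysNo` and assert on the returned status.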

If you implemented edit mode, paste back a modified version and verify that your downstream code uses the edited text rather than the raw model output. For tool gating, log every proposed call so you can audit what was blocked versus approved.
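One way to keep that audit trail is an in-memory log of every decision. A sketch under the assumption that an array is enough for a single process (a real system would write to durable storage); the `AuditEntry` shape is illustrative, and `ToolCall` mirrors the type from the tool-gating step:

```typescript
type ToolCall = {
  name: string;
  arguments: Record<string, unknown>;
};

type AuditEntry = {
  call: ToolCall;
  decision: "approved" | "blocked";
  at: string; // ISO timestamp of the decision
};

const auditLog: AuditEntry[] = [];

// Record every proposed call and its outcome so you can later review
// what was blocked versus approved.
function recordDecision(call: ToolCall, approved: boolean): void {
  auditLog.push({
    call,
    decision: approved ? "approved" : "blocked",
    at: new Date().toISOString(),
  });
}
```

Call `recordDecision` right after the human answers, in both branches, so rejections are captured as faithfully as approvals.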

A good production test is to simulate three cases:

  • harmless content gets auto-approved by policy
  • sensitive content requires manual approval
  • rejected content never reaches the external system
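The three cases above can be sketched as a small policy function that auto-approves low-risk content and routes high-risk content to a human. The keyword list and dollar threshold here are illustrative assumptions, not a real policy:

```typescript
type Risk = "low" | "high";

// Naive risk classifier: sensitive keywords or large dollar amounts
// trigger human review. Tune both to your own domain.
function classify(summary: string): Risk {
  const sensitive = /\b(transfer|payment|delete|refund)\b/i.test(summary);
  const amount = summary.match(/\$(\d[\d,]*)/);
  const large = amount ? Number(amount[1].replace(/,/g, "")) >= 1000 : false;
  return sensitive || large ? "high" : "low";
}

// Low-risk actions are approved by policy; high-risk ones go to a human.
async function route(
  summary: string,
  askHuman: (s: string) => Promise<boolean>,
): Promise<boolean> {
  return classify(summary) === "low" ? true : askHuman(summary);
}
```

With this in place, the third test case falls out naturally: anything `askHuman` rejects returns `false` and never reaches the external system.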

Next Steps

  • Add policy-based routing so low-risk requests skip human review while high-risk ones require it.
  • Replace stdin with Slack, Teams, or an internal web approval page.
  • Combine this with structured outputs so reviewers approve JSON payloads instead of free-form text.
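For the structured-outputs idea, the reviewer should only ever see a well-formed payload. A minimal validation sketch, where the `EmailPayload` shape is hypothetical and you would substitute your own schema (or a library like zod):

```typescript
type EmailPayload = {
  to: string;
  subject: string;
  body: string;
};

// Parse and validate the model's JSON before showing it to a reviewer,
// so the human approves a typed payload rather than free-form text.
// Returns null for anything malformed instead of throwing.
function parseEmailPayload(raw: string): EmailPayload | null {
  try {
    const data = JSON.parse(raw);
    if (
      typeof data?.to === "string" &&
      typeof data?.subject === "string" &&
      typeof data?.body === "string"
    ) {
      return { to: data.to, subject: data.subject, body: data.body };
    }
    return null;
  } catch {
    return null;
  }
}
```

A `null` result should route back to the model for a retry, not to the reviewer.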

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
