LlamaIndex Tutorial (TypeScript): adding human-in-the-loop for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add a human approval step to a LlamaIndex TypeScript workflow before an agent takes an action. You need this when the model is allowed to draft answers, but a person must approve sensitive actions such as sending emails, updating records, or escalating claims.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with ts-node or tsx
  • @llamaindex/core
  • @llamaindex/openai
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with LlamaIndex chat engines or agents
  • A terminal where you can run interactive prompts

Step-by-Step

  1. Install the packages and set up your environment.
    We’ll use LlamaIndex for the LLM call and a simple CLI prompt for the human approval gate.
npm install @llamaindex/core @llamaindex/openai dotenv readline-sync
npm install -D typescript tsx @types/node
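    The dotenv import used in the next step loads variables from a .env file in your project root. A minimal sketch of that file, using the OPENAI_API_KEY variable listed in the prerequisites (substitute your own key):
OPENAI_API_KEY=sk-your-key-here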
  2. Create a small TypeScript file that asks the model to draft an action, then pauses for human approval.
    This pattern is better than letting the agent act immediately because it gives you a controlled review point before anything risky happens.
import "dotenv/config";
import { OpenAI } from "@llamaindex/openai";
import readlineSync from "readline-sync";

const llm = new OpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function main() {
  const userRequest = "Draft a polite email asking a customer to confirm their policy details.";
  const draft = await llm.complete({
    prompt: `Write a short email based on this request: ${userRequest}`,
  });

  console.log("\n--- Draft ---\n");
  console.log(draft.text);

  const approved = readlineSync.keyInYN("\nApprove this draft?");
  console.log(approved ? "\nApproved." : "\nRejected.");
}

main().catch(console.error);
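    Assuming you saved this as draft-approval.ts (the filename is up to you), you can run it with tsx:
npx tsx draft-approval.ts
    You should see the drafted email printed first, then the Y/N approval prompt.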
  3. Turn the approval check into a reusable function.
    In production, you do not want approval logic scattered across your codebase. Wrap it so every sensitive workflow uses the same gate.
import readlineSync from "readline-sync";

export function requireHumanApproval(title: string, payload: string): boolean {
  console.log(`\n=== ${title} ===\n`);
  console.log(payload);
  // keyInYN can return '' if a key other than Y/N is pressed, so only an explicit "y" counts as approval.
  return readlineSync.keyInYN("\nApprove this action?") === true;
}
  4. Use that gate before executing any side effect.
    Here’s the important part: generate the plan or draft first, then block until a person approves it, and only then continue.
import "dotenv/config";
import { OpenAI } from "@llamaindex/openai";
import { requireHumanApproval } from "./approval";

const llm = new OpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function main() {
  const request = "Update customer record with their new phone number.";
  const plan = await llm.complete({
    prompt: `Create a safe step-by-step action plan for: ${request}`,
  });

  const approved = requireHumanApproval("Proposed Action Plan", plan.text);
  if (!approved) {
    console.log("Action cancelled.");
    return;
  }

  console.log("Proceeding with the action...");
}

main().catch(console.error);
  5. If you want this inside an agent flow, keep the tool call behind the same approval step.
    The agent can propose which tool to use, but your application decides whether that tool ever runs.
import { requireHumanApproval } from "./approval";

type ToolAction = {
  toolName: string;
  input: string;
};

function executeTool(action: ToolAction) {
  console.log(`Executing ${action.toolName} with input: ${action.input}`);
}

const proposedAction: ToolAction = {
  toolName: "send_email",
  input: "Email customer asking them to verify their address.",
};

if (requireHumanApproval("Tool Call Request", JSON.stringify(proposedAction, null, 2))) {
  executeTool(proposedAction);
} else {
  console.log("Tool call blocked by human reviewer.");
}
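    In a real agent flow, the proposed action would come from the model rather than being hardcoded. Here is a minimal sketch of that hand-off, continuing the snippet above (ToolAction, executeTool, and requireHumanApproval are already in scope; add the extra imports at the top of the file). The prompt wording, the JSON-only instruction, and the naive parsing are illustrative assumptions, not a built-in LlamaIndex feature:
import "dotenv/config";
import { OpenAI } from "@llamaindex/openai";

const llm = new OpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function proposeToolAction(request: string): Promise<ToolAction> {
  // Ask the model to pick a tool and its input, replying with JSON only.
  const response = await llm.complete({
    prompt: `Pick one tool ("send_email" or "update_record") for this request and reply only with JSON shaped as {"toolName": "...", "input": "..."}: ${request}`,
  });
  // Naive parse for illustration; production code should validate the shape first.
  return JSON.parse(response.text) as ToolAction;
}

async function run() {
  const action = await proposeToolAction("Ask the customer to verify their address.");
  if (requireHumanApproval("Tool Call Request", JSON.stringify(action, null, 2))) {
    executeTool(action);
  } else {
    console.log("Tool call blocked by human reviewer.");
  }
}

run().catch(console.error);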

Testing It

Run the script with OPENAI_API_KEY set in your shell or .env file. You should see the model output first, then an approval prompt in your terminal.

Try approving once and rejecting once so you can confirm both branches work. If you approve, the script should continue past the gate; if you reject, it should stop cleanly without running the protected action.

If you are wiring this into a real workflow, test with a harmless side effect first, like writing to a log file instead of sending an email. That gives you confidence that the human checkpoint is placed before execution, not after.
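For example, a dry-run stand-in for the real tool keeps the same gate but only appends to a local file; the actions.log path is an arbitrary choice:
import { appendFileSync } from "node:fs";

// Harmless stand-in for a real side effect: record what would have run.
function executeToolDryRun(action: { toolName: string; input: string }) {
  appendFileSync(
    "actions.log",
    `${new Date().toISOString()} DRY-RUN ${action.toolName}: ${action.input}\n`
  );
  console.log("Dry run recorded in actions.log");
}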

Next Steps

  • Replace readline-sync with a web-based approval UI for internal reviewers
  • Add audit logging for every draft, reviewer decision, and final action (see the sketch after this list)
  • Move from one-off prompts to LlamaIndex workflows with explicit state between proposal and approval
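For the audit-logging idea, one minimal sketch is an append-only JSON-lines file that records every reviewer decision; the file name and field names here are illustrative:
import { appendFileSync } from "node:fs";

type AuditEntry = {
  timestamp: string;
  title: string;
  payload: string;
  approved: boolean;
};

// One JSON line per decision keeps the history easy to grep and replay.
export function recordDecision(entry: Omit<AuditEntry, "timestamp">) {
  const line = JSON.stringify({ timestamp: new Date().toISOString(), ...entry });
  appendFileSync("audit.log", line + "\n");
}

You would call recordDecision immediately after requireHumanApproval returns, before any protected action executes.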

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

