LlamaIndex Tutorial (TypeScript): adding human-in-the-loop for advanced developers
This tutorial shows how to pause a LlamaIndex workflow, hand a decision to a human, and resume execution in TypeScript. You need this when an agent is about to take a risky action — like approving a refund, escalating a claim, or sending customer data — and you want deterministic human approval before continuing.
What You'll Need
- Node.js 18+ and npm
- A TypeScript project with `ts-node` or `tsx`
- `@llamaindex/core`
- `zod`
- An OpenAI API key if you want to plug the workflow into an LLM later
- A terminal for running the example locally
Install the packages:
```bash
npm install @llamaindex/core zod
npm install -D typescript tsx @types/node
```
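The workflow code later in this tutorial uses the `@step()` method decorator. Depending on your TypeScript version and how the library types `step`, the compiler may reject decorator syntax unless the legacy decorator flag is enabled. The fragment below is a guess at a working setup for this tutorial, not an official LlamaIndex requirement; adjust to match your project.

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "experimentalDecorators": true,
    "strict": true
  }
}
```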
Step-by-Step
- Start by defining the shape of the decision you want from a human. Keep it explicit; if you ask for free-form feedback, your workflow becomes hard to validate and harder to resume safely.

```ts
import { z } from "zod";

// The contract a human reviewer must satisfy before the workflow resumes.
export const ApprovalSchema = z.object({
  approved: z.boolean(),
  reason: z.string().min(1), // require a non-empty justification
});

export type Approval = z.infer<typeof ApprovalSchema>;
```
- Next, create a small helper that simulates a human review step. In production, this would be replaced by Slack, an internal UI, or a queue consumer that writes the decision back.

```ts
import { ApprovalSchema, type Approval } from "./approval";

// Simulated human review: always approves. Swap this out for a real
// review channel (Slack, admin UI, queue) in production.
export async function requestHumanApproval(
  context: { customerId: string; amount: number }
): Promise<Approval> {
  console.log("Human review needed:", context);

  // Hardcoded stand-in for the reviewer's response.
  const raw = {
    approved: true,
    reason: "Refund is within policy and customer has prior escalation history.",
  };

  // Validate before returning so malformed decisions fail fast.
  return ApprovalSchema.parse(raw);
}
```
- Now wire that human gate into a LlamaIndex workflow. The important part is that the workflow stops at the decision point and only continues once it receives validated input.

```ts
import { Workflow, Event, step } from "@llamaindex/core/workflow";
import { requestHumanApproval } from "./human-review";
import type { Approval } from "./approval";

// Exported so the entrypoint can construct the starting event.
export class ReviewEvent extends Event<{ customerId: string; amount: number }> {}
export class ApprovedEvent extends Event<Approval> {}
export class RejectedEvent extends Event<Approval> {}

export class RefundWorkflow extends Workflow {
  // The human gate: execution pauses here until a validated decision arrives.
  @step()
  async review(event: ReviewEvent) {
    const decision = await requestHumanApproval(event.data);
    if (decision.approved) {
      return new ApprovedEvent(decision);
    }
    return new RejectedEvent(decision);
  }

  @step()
  async approve(event: ApprovedEvent) {
    console.log("Approved:", event.data.reason);
    return { status: "approved" as const };
  }

  @step()
  async reject(event: RejectedEvent) {
    console.log("Rejected:", event.data.reason);
    return { status: "rejected" as const };
  }
}
```
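Independent of the LlamaIndex API, the control flow above is just "each step returns an event, and the runtime routes that event to the step registered for its type." A framework-free sketch of that routing (all names here are illustrative, not LlamaIndex exports):

```ts
interface Approval {
  approved: boolean;
  reason: string;
}

// Plain classes standing in for the workflow's event types.
class ApprovedEvent {
  constructor(public readonly data: Approval) {}
}
class RejectedEvent {
  constructor(public readonly data: Approval) {}
}

// Review step: the human's decision selects exactly one outgoing event,
// mirroring the branch inside RefundWorkflow.review.
function review(decision: Approval): ApprovedEvent | RejectedEvent {
  return decision.approved
    ? new ApprovedEvent(decision)
    : new RejectedEvent(decision);
}

// The runtime then dispatches on the event's concrete type.
function dispatch(event: ApprovedEvent | RejectedEvent): "approved" | "rejected" {
  return event instanceof ApprovedEvent ? "approved" : "rejected";
}
```

Seeing the pattern stripped down makes it easier to verify that the human gate is the only place where the two branches diverge.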
- Add an entrypoint that starts the workflow with real business data. This is where you would pass in your extracted agent output after retrieval, classification, or tool use.

```ts
import { RefundWorkflow, ReviewEvent } from "./workflow";

async function main() {
  const wf = new RefundWorkflow();

  // Kick off the workflow with the data the reviewer needs to see.
  const result = await wf.run(new ReviewEvent({
    customerId: "cust_123",
    amount: 250,
  }));

  console.log("Final result:", result);
}

main().catch(console.error);
```
- For production use, persist the human decision separately from the workflow execution. That gives you auditability, replay support, and a clean boundary between agent logic and operational controls.

```ts
import fs from "node:fs/promises";
import type { Approval } from "./approval";

export async function saveDecision(
  requestId: string,
  decision: Approval
): Promise<void> {
  // Ensure the audit directory exists before writing.
  await fs.mkdir("./decisions", { recursive: true });

  await fs.writeFile(
    `./decisions/${requestId}.json`,
    JSON.stringify(
      {
        requestId,
        ...decision,
        decidedAt: new Date().toISOString(), // audit timestamp
      },
      null,
      2
    )
  );
}
```
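For the replay side of that boundary, a loader can read the persisted decision back so a restarted workflow resumes from the human's answer. This is a hypothetical counterpart to `saveDecision`, not part of the tutorial's code; the `dir` parameter and `StoredDecision` interface are additions for testability:

```ts
import fs from "node:fs/promises";

// Shape written by saveDecision: the approval plus audit metadata.
interface StoredDecision {
  requestId: string;
  approved: boolean;
  reason: string;
  decidedAt: string;
}

// Hypothetical loader: the `dir` default mirrors the path saveDecision
// writes to, but is overridable so the function is easy to test.
export async function loadDecision(
  requestId: string,
  dir = "./decisions"
): Promise<StoredDecision> {
  const raw = await fs.readFile(`${dir}/${requestId}.json`, "utf8");

  const parsed = JSON.parse(raw) as StoredDecision;
  if (typeof parsed.approved !== "boolean" || typeof parsed.reason !== "string") {
    throw new Error(`Corrupt decision record for ${requestId}`);
  }
  return parsed;
}
```

Validating on read as well as on write means a hand-edited or truncated record fails loudly instead of silently resuming the wrong branch.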
Testing It
Run the entrypoint with `tsx` and confirm that the workflow prints the review context before producing a final status. If you set `approved` to `false` in the simulated helper, it should route to the rejection path without any other code changes.
A good test is to swap the hardcoded `raw` object for different cases:

- an approved refund under the policy limit
- a rejected refund with a missing justification
- a malformed payload missing `reason`
The last case should fail fast at schema validation, which is what you want in a human-in-the-loop system. That protects your downstream steps from garbage input and makes review failures visible immediately.
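Those three cases can be driven from a small table. The guard below is a hand-rolled stand-in for `ApprovalSchema.parse` so the sketch runs without zod (an assumption for self-containment); in the real workflow, keep the zod schema as the gatekeeper:

```ts
interface Approval {
  approved: boolean;
  reason: string;
}

// Minimal stand-in for ApprovalSchema.parse: throws on malformed input,
// matching zod's fail-fast behavior for this shape.
function parseApproval(raw: unknown): Approval {
  const v = raw as Partial<Approval> | null;
  if (
    !v ||
    typeof v.approved !== "boolean" ||
    typeof v.reason !== "string" ||
    v.reason.length === 0
  ) {
    throw new Error("Invalid approval payload");
  }
  return { approved: v.approved, reason: v.reason };
}

// The three test cases from above, table-driven.
const cases: { name: string; payload: unknown }[] = [
  { name: "approved under policy", payload: { approved: true, reason: "Under limit" } },
  { name: "rejected, justified", payload: { approved: false, reason: "Missing receipts" } },
  { name: "malformed, no reason", payload: { approved: true } },
];
```

Only the third case should throw; the first two should parse cleanly into their respective branches.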
Next Steps
- Replace the simulated review helper with a Slack bot or internal admin UI
- Add durable storage for pending approvals so workflows can resume after restarts
- Combine this pattern with LlamaIndex retrieval so humans only review low-confidence or high-risk outputs
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.