CrewAI Tutorial (TypeScript): adding human-in-the-loop for intermediate developers
This tutorial shows you how to pause a CrewAI workflow in TypeScript, route a decision to a human, and then continue execution with the approved input. You need this when an agent is about to make a risky call, needs compliance review, or should ask for clarification instead of guessing.
What You'll Need
- Node.js 18+ and npm
- A TypeScript project already set up
- `crewai` installed in your project
- `dotenv` for environment variables
- An OpenAI API key in `.env`
- Basic familiarity with CrewAI agents, tasks, and crews
- A terminal where you can run a small interactive prompt
Step-by-Step
- Start by installing the dependencies and setting up your environment variables. Keep the model config external so the human-in-the-loop logic stays focused on orchestration.

```bash
npm install crewai dotenv
npm install -D typescript ts-node @types/node
```

```bash
# .env
OPENAI_API_KEY=your_key_here
```
- Create a small human approval helper. This is the key pattern: the agent produces a draft, your code inspects it, and if the result crosses a threshold you pause for human input before continuing.

```ts
// human.ts
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

// Prompt a human reviewer on stdin and return the trimmed answer.
export async function askHuman(question: string): Promise<string> {
  const rl = readline.createInterface({ input, output });
  try {
    const answer = await rl.question(`${question}\n> `);
    return answer.trim();
  } finally {
    rl.close();
  }
}
```
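Because `askHuman` does raw terminal I/O, it is awkward to unit-test. One option (an illustrative helper, not part of CrewAI) is to keep the interpretation of the reviewer's answer in a pure function, so only the thin I/O wrapper touches stdin:

```typescript
// approval.ts (hypothetical helper for this tutorial, not part of CrewAI)
export type ReviewDecision =
  | { kind: "stop" }                    // reviewer left the prompt blank
  | { kind: "approve"; text: string };  // reviewer supplied an approved version

// Interpret the raw answer returned by askHuman(). Keeping this pure
// makes the decision logic testable without faking stdin.
export function parseApproval(raw: string): ReviewDecision {
  const text = raw.trim();
  return text === "" ? { kind: "stop" } : { kind: "approve", text };
}
```

In `index.ts` you would then branch on `parseApproval(answer).kind` instead of checking the string directly.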
- Build a crew that drafts a customer-facing response. The important part here is that the agent only creates the first pass; your app decides whether to accept it or send it back to a person.

```ts
// crew.ts
import "dotenv/config";
import { Agent, Task, Crew } from "crewai";

export const supportAgent = new Agent({
  role: "Customer Support Analyst",
  goal: "Draft accurate responses for customer issues",
  backstory: "You write concise support replies for banking operations teams.",
});

export const draftTask = new Task({
  description:
    "Draft a reply to this customer complaint: {complaint}. Flag any risky statements.",
  expectedOutput: "A short draft reply plus a risk flag.",
  agent: supportAgent,
});

export function buildCrew() {
  return new Crew({
    agents: [supportAgent],
    tasks: [draftTask],
  });
}
```
- Add the human-in-the-loop gate in your application entry point. Here we inspect the draft, ask for approval if needed, then either continue with the human-edited version or stop the workflow.

```ts
// index.ts
import { buildCrew } from "./crew";
import { askHuman } from "./human";

async function main() {
  const complaint =
    "My card was charged twice for the same transaction and I want this fixed today.";
  const crew = buildCrew();
  const result = await crew.kickoff({ complaint });
  const draft = String(result);

  const needsReview =
    draft.toLowerCase().includes("refund") || draft.toLowerCase().includes("guarantee");

  console.log("\n--- Draft ---\n");
  console.log(draft);

  if (needsReview) {
    const approved = await askHuman(
      "This response may need review. Paste an approved version or leave blank to stop:"
    );
    if (!approved) {
      console.log("Stopped by human reviewer.");
      return;
    }
    console.log("\n--- Approved Response ---\n");
    console.log(approved);
    return;
  }

  console.log("No human review needed.");
}

main().catch(console.error);
```
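The inline keyword check above works, but a pure function is easier to test and extend as your review rules grow. A sketch (the keyword list here is illustrative; tune it to your domain):

```typescript
// review-gate.ts (sketch; the RISKY_TERMS list is an illustrative assumption)
const RISKY_TERMS = ["refund", "guarantee", "chargeback", "legal"];

// Returns true when the draft mentions any term that should trigger
// human review before the reply is sent. Matching is case-insensitive.
export function needsHumanReview(draft: string): boolean {
  const lower = draft.toLowerCase();
  return RISKY_TERMS.some((term) => lower.includes(term));
}
```

In `index.ts` the gate then becomes `if (needsHumanReview(draft)) { ... }`, and adding a new sensitive term is a one-line change.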
- Run it through TypeScript directly or compile first. Note that plain ts-node cannot load `.ts` files as ES modules, so leave `"type": "module"` out of package.json (or switch to a loader such as tsx). For production systems, wire this into your API layer or queue worker so the approval step can be resumed later instead of blocking a process forever.

```json
{
  "name": "crewai-hitl-ts",
  "private": true,
  "scripts": {
    "start": "ts-node index.ts"
  }
}
```

```bash
npm run start
```
Testing It
Run the script and watch for two outcomes: either the draft passes through automatically, or the program pauses and asks for human approval. If you paste an edited response at the prompt, that becomes the approved output and is printed back to the terminal.
Test three cases:
- A low-risk complaint that should not trigger review
- A complaint containing sensitive wording like “refund” or “guarantee”
- An empty approval response to confirm the workflow stops cleanly
If you want stronger verification, log both the original draft and final approved text so you can audit what changed between agent output and human intervention.
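A minimal audit record for that could look like the sketch below (illustrative shape and names; swap the in-memory array for your real log sink):

```typescript
// audit.ts (illustrative; replace the in-memory array with a database or log sink)
export interface AuditEntry {
  timestamp: string;       // ISO-8601 time of the review
  draft: string;           // what the agent produced
  approved: string | null; // what the human accepted, or null if stopped
}

const auditLog: AuditEntry[] = [];

// Record both sides of the review so you can later diff agent output
// against the final human-approved text.
export function recordReview(draft: string, approved: string | null): AuditEntry {
  const entry: AuditEntry = { timestamp: new Date().toISOString(), draft, approved };
  auditLog.push(entry);
  return entry;
}

export function getAuditLog(): readonly AuditEntry[] {
  return auditLog;
}
```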
Next Steps
- Persist pending approvals in Postgres or Redis so reviewers can resume work later
- Replace keyword-based gating with structured risk scoring from a classifier task
- Add role-based approval routing so compliance, ops, and legal each see only their own queue
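For the first of those, the core data shape is small. A minimal in-memory sketch of a pending-approval queue (hypothetical names; a real system would back this with Postgres or Redis so a worker can park a draft and a reviewer can resolve it later):

```typescript
// approvals-store.ts (in-memory sketch; replace the Map with Postgres/Redis in production)
import { randomUUID } from "node:crypto";

export interface PendingApproval {
  id: string;
  draft: string;
  status: "pending" | "approved" | "rejected";
  approvedText?: string;
}

const store = new Map<string, PendingApproval>();

// Park a draft and return an id the reviewer UI can use later,
// so the worker process does not block waiting on a human.
export function submitForApproval(draft: string): PendingApproval {
  const item: PendingApproval = { id: randomUUID(), draft, status: "pending" };
  store.set(item.id, item);
  return item;
}

// Called by the reviewer endpoint when a decision comes back.
// A null approvedText means the reviewer rejected the draft.
export function resolveApproval(
  id: string,
  approvedText: string | null
): PendingApproval | undefined {
  const item = store.get(id);
  if (!item) return undefined;
  if (approvedText === null) {
    item.status = "rejected";
  } else {
    item.status = "approved";
    item.approvedText = approvedText;
  }
  return item;
}
```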
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.