Haystack Tutorial (TypeScript): adding human-in-the-loop for advanced developers
This tutorial shows how to insert a human approval gate into a Haystack TypeScript pipeline before an answer is returned. You need this when the model is making high-risk decisions, generating customer-facing text, or handling workflows where a person must approve, edit, or reject the AI output.
What You'll Need
- Node.js 18+
- A TypeScript project with a `tsconfig.json`
- The Haystack TypeScript packages installed: `@haystack/core` and `@haystack/openai`
- An OpenAI API key in `OPENAI_API_KEY`
- A terminal and a way to run TypeScript, such as `tsx` or `ts-node`
- Basic familiarity with Haystack pipelines and components
Step-by-Step
1. Start by creating a pipeline that generates an answer from retrieved context. The key point is that the model output should be treated as a draft, not the final response.
```ts
import { Pipeline } from "@haystack/core";
import { OpenAIChatGenerator } from "@haystack/openai";

const generator = new OpenAIChatGenerator({
  model: "gpt-4o-mini",
});

const pipeline = new Pipeline();
pipeline.addComponent("generator", generator);

const result = await pipeline.run({
  generator: {
    messages: [
      { role: "system", content: "Answer only using the provided context." },
      { role: "user", content: "Draft a response for a mortgage application status update." },
    ],
  },
});

// At this point the reply is a draft: nothing has been released to the user yet.
console.log(result.generator.replies[0].content);
```
2. Add a human review function between generation and release. In production, this would be an internal UI, queue, or Slack approval flow; here we simulate it with a terminal prompt so the pattern is concrete.
```ts
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

// Shows the draft to a reviewer and blocks until they answer y/yes or n.
export async function requestHumanApproval(draft: string): Promise<boolean> {
  const rl = readline.createInterface({ input, output });
  console.log("\n--- Draft for review ---\n");
  console.log(draft);
  console.log("\nApprove this response? (y/n)");
  const answer = (await rl.question("> ")).trim().toLowerCase();
  rl.close();
  return answer === "y" || answer === "yes";
}
```
3. Wrap the pipeline result so the final response only leaves your system after approval. This is the part most teams miss: the model can produce content, but your application owns release control.
```ts
import { Pipeline } from "@haystack/core";
import { OpenAIChatGenerator } from "@haystack/openai";
import { requestHumanApproval } from "./human-review.js";

const generator = new OpenAIChatGenerator({ model: "gpt-4o-mini" });
const pipeline = new Pipeline();
pipeline.addComponent("generator", generator);

const result = await pipeline.run({
  generator: {
    messages: [
      { role: "system", content: "Write concise customer support replies." },
      { role: "user", content: "Explain why my claim was delayed." },
    ],
  },
});

const draft = result.generator.replies[0].content;
const approved = await requestHumanApproval(draft);

// Fail closed: a rejected draft never leaves the system.
if (!approved) {
  throw new Error("Human rejected the draft response.");
}

console.log("\nFinal approved response:\n");
console.log(draft);
```
4. If you want real operational value, store both the draft and the decision. That gives you auditability, which matters for regulated workflows and post-incident review.
```ts
import { randomUUID } from "node:crypto";

type ReviewRecord = {
  requestId: string;
  draft: string;
  approved: boolean;
  reviewedAt: string;
};

// Stub: in production, write the record to a database or append-only log.
function saveReview(record: ReviewRecord) {
  console.log(JSON.stringify(record, null, 2));
}

const record: ReviewRecord = {
  requestId: randomUUID(),
  draft,
  approved,
  reviewedAt: new Date().toISOString(),
};

saveReview(record);
```
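The `saveReview` stub above only prints to the console. As a minimal step up, you can append each record to a local JSONL file, one JSON object per line. A sketch, assuming a local file is acceptable in your environment; the file name is arbitrary:

```ts
import { appendFileSync } from "node:fs";

// Minimal sketch: append each ReviewRecord as one line of JSON.
// "reviews.jsonl" is an illustrative path, not a convention from Haystack.
function saveReviewToFile(record: ReviewRecord) {
  appendFileSync("reviews.jsonl", JSON.stringify(record) + "\n", "utf8");
}
```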
5. For advanced systems, route only specific cases to human review instead of every request. A common pattern is to auto-approve low-risk outputs and escalate anything involving money movement, legal language, or low-confidence retrieval.
```ts
// Cheap keyword screen; replace with a classifier or confidence score later.
function needsReview(text: string): boolean {
  const riskyPatterns = [
    /refund/i,
    /terminate/i,
    /legal/i,
    /deny/i,
    /payment/i,
  ];
  return riskyPatterns.some((pattern) => pattern.test(text));
}

// Low-risk drafts skip the gate; risky ones block on a reviewer.
if (needsReview(draft)) {
  const approvedRiskyDraft = await requestHumanApproval(draft);
  if (!approvedRiskyDraft) throw new Error("Rejected by reviewer.");
}
```
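Putting the routing and the audit trail together, one way to structure the release decision is a single gate that records every outcome, including auto-approvals. This is a sketch assembled from the snippets above; `releaseResponse` is an illustrative name, not a Haystack API:

```ts
// Sketch: one release gate that audits every decision, not just escalations.
// Reuses needsReview, requestHumanApproval, and saveReview from earlier steps.
async function releaseResponse(requestId: string, draft: string): Promise<string> {
  // Auto-approve low-risk drafts; escalate everything else to a human.
  const approved = needsReview(draft) ? await requestHumanApproval(draft) : true;

  saveReview({
    requestId,
    draft,
    approved,
    reviewedAt: new Date().toISOString(),
  });

  if (!approved) {
    throw new Error("Rejected by reviewer.");
  }
  return draft;
}
```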
Testing It
Run the script with a valid `OPENAI_API_KEY` and confirm you see a generated draft before any final output appears. Approve it once with `y`, then reject it with `n` and verify the process stops with an error.
Test one low-risk prompt and one high-risk prompt so you can confirm your routing logic behaves differently. If you added audit logging, check that each run writes the draft plus approval state.
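The routing check itself is pure logic, so you can cover it with a unit test before touching the API. A sketch using Vitest (an assumption; any test runner works), with `needsReview` assumed to be exported from a module:

```ts
import { describe, expect, it } from "vitest";
import { needsReview } from "./routing.js"; // assumed module path

describe("needsReview", () => {
  it("escalates drafts that mention risky topics", () => {
    expect(needsReview("We will process your refund today.")).toBe(true);
  });

  it("lets low-risk drafts through without review", () => {
    expect(needsReview("Your documents were received.")).toBe(false);
  });
});
```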
If you are wiring this into an API server, make sure the request does not return the AI answer until approval has been recorded. That separation is what turns “human-in-the-loop” from a demo into a control point.
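To make that concrete, here is one possible shape for an HTTP server, sketched with Express and an in-memory store (both assumptions): the generate endpoint returns a review ticket instead of the answer, and the answer is only readable after approval has been recorded.

```ts
import express from "express";
import { randomUUID } from "node:crypto";

// In-memory store for illustration only; use a database in production.
const pending = new Map<string, { draft: string; approved: boolean }>();

const app = express();
app.use(express.json());

// 1. Generation stores a draft and returns a ticket, never the answer.
app.post("/generate", async (req, res) => {
  const draft = "..."; // run the Haystack pipeline here to produce the draft
  const id = randomUUID();
  pending.set(id, { draft, approved: false });
  res.status(202).json({ reviewId: id });
});

// 2. A reviewer approves the draft out of band.
app.post("/reviews/:id/approve", (req, res) => {
  const entry = pending.get(req.params.id);
  if (!entry) return res.sendStatus(404);
  entry.approved = true;
  res.sendStatus(204);
});

// 3. The answer is only released once approval has been recorded.
app.get("/reviews/:id", (req, res) => {
  const entry = pending.get(req.params.id);
  if (!entry) return res.sendStatus(404);
  if (!entry.approved) return res.status(202).json({ status: "pending" });
  res.json({ response: entry.draft });
});

app.listen(3000);
```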
Next Steps
- Replace the terminal prompt with a real reviewer workflow in Slack, Teams, or an internal dashboard
- Add confidence scoring and policy rules to route only risky cases to humans
- Persist review records in Postgres so approvals are searchable during audits (a starting sketch follows below)
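For that last item, a starting point using the `pg` client (an assumption; the table and column names are illustrative):

```ts
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* environment variables

// Illustrative schema; adjust names and types to your own conventions.
await pool.query(`
  CREATE TABLE IF NOT EXISTS review_records (
    request_id  uuid PRIMARY KEY,
    draft       text NOT NULL,
    approved    boolean NOT NULL,
    reviewed_at timestamptz NOT NULL
  )
`);

async function saveReviewToPostgres(record: ReviewRecord) {
  await pool.query(
    `INSERT INTO review_records (request_id, draft, approved, reviewed_at)
     VALUES ($1, $2, $3, $4)`,
    [record.requestId, record.draft, record.approved, record.reviewedAt],
  );
}
```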
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit