Haystack Tutorial (TypeScript): adding human-in-the-loop for intermediate developers
This tutorial shows how to insert a human approval step into a Haystack TypeScript pipeline before an answer is returned. You need this when the model is allowed to draft responses, but a person must review risky outputs like compliance advice, claims decisions, or customer-facing messages.
What You'll Need
- Node.js 18+ and npm
- A TypeScript project with `ts-node` or `tsx`
- `@haystack/core`
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with Haystack pipelines and components
- A terminal for running the script and answering prompts
Step-by-Step
- Start by installing the package and setting up a simple project file. We’ll build a pipeline that drafts an answer, then pauses for a human decision before returning the final output.

```bash
npm install @haystack/core
npm install -D typescript tsx @types/node
```
- Define two custom components: one to generate a draft answer and one to ask for approval in the terminal. The approval component is the human-in-the-loop gate; it either forwards the draft or blocks it.

```ts
import { Component } from "@haystack/core";
import { createInterface } from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

@Component({
  name: "DraftAnswer",
  inputs: ["question"],
  outputs: ["draft"],
})
class DraftAnswer {
  async run({ question }: { question: string }) {
    return {
      draft: `Draft response for: ${question}\n\nThis should be reviewed by a human before sending.`,
    };
  }
}

@Component({
  name: "HumanApproval",
  inputs: ["draft"],
  outputs: ["approved", "final"],
})
class HumanApproval {
  async run({ draft }: { draft: string }) {
    const rl = createInterface({ input, output });
    console.log("\n--- DRAFT ---\n" + draft);
    const answer = (await rl.question("\nApprove? (y/n): ")).trim().toLowerCase();
    rl.close();
    return answer === "y"
      ? { approved: true, final: draft }
      : { approved: false, final: "Request rejected by reviewer." };
  }
}
```
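The decision logic inside `HumanApproval` is the part you will want to unit-test, so it helps to keep it separable from the terminal I/O. Here is a minimal sketch of that logic as a pure function; `gateDraft` and `GateResult` are illustrative names, not part of the `@haystack/core` API:

```typescript
// Hypothetical helper: the core approve/reject decision from
// HumanApproval, extracted as a pure function so both paths can
// be exercised without a terminal session.
type GateResult = { approved: boolean; final: string };

function gateDraft(draft: string, reviewerInput: string): GateResult {
  // Normalize the reviewer's answer the same way the component does.
  const normalized = reviewerInput.trim().toLowerCase();
  return normalized === "y"
    ? { approved: true, final: draft }
    : { approved: false, final: "Request rejected by reviewer." };
}

// Approve path forwards the draft unchanged; reject path blocks it.
console.log(gateDraft("Waive the fee.", " Y ")); // approved: true
console.log(gateDraft("Waive the fee.", "n"));   // approved: false
```

Keeping the gate pure also makes it trivial to swap the terminal prompt for any other input source later.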
- Wire the components into a pipeline. The key detail is that the model output does not go straight to the user; it must pass through the approval step first.

```ts
import { Pipeline } from "@haystack/core";

const pipeline = new Pipeline();
pipeline.addComponent("draft", new DraftAnswer());
pipeline.addComponent("approval", new HumanApproval());
pipeline.connect("draft.draft", "approval.draft");
```
- Run the pipeline with an example question and print only the approved result. In production, this is where you would replace the terminal prompt with Slack, Jira, or an internal review UI.

```ts
async function main() {
  const result = await pipeline.run({
    draft: {
      question: "Can we waive this customer's late fee?",
    },
  });
  console.log("\n--- FINAL ---");
  console.log(result.approval.final);
}

main().catch(console.error);
```
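The data flow above can also be sketched without the framework: the draft step's output feeds the approval step, and only the gate's result reaches the caller. All names below (`draftStep`, `approvalStep`, `runFlow`) are illustrative, not `@haystack/core` APIs:

```typescript
// Framework-free sketch of the same two-stage flow. The decision
// is passed in as a parameter so the flow stays testable.
type Draft = { draft: string };
type Final = { approved: boolean; final: string };

async function draftStep(question: string): Promise<Draft> {
  return { draft: `Draft response for: ${question}` };
}

async function approvalStep(d: Draft, decision: "y" | "n"): Promise<Final> {
  return decision === "y"
    ? { approved: true, final: d.draft }
    : { approved: false, final: "Request rejected by reviewer." };
}

async function runFlow(question: string, decision: "y" | "n"): Promise<Final> {
  const d = await draftStep(question); // corresponds to draft.draft
  return approvalStep(d, decision);    // corresponds to approval.final
}
```

This mirrors what `pipeline.connect("draft.draft", "approval.draft")` sets up: the caller never sees the raw draft, only the gated result.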
- If you want this to behave like a real intermediate control point, add structured metadata so reviewers can see why they’re approving something. That usually means including confidence, policy flags, or source citations in the payload.

```ts
@Component({
  name: "RiskAnnotator",
  inputs: ["draft"],
  outputs: ["reviewPacket"],
})
class RiskAnnotator {
  async run({ draft }: { draft: string }) {
    return {
      reviewPacket: {
        text: draft,
        riskLevel: "medium",
        reason: "Customer-facing financial decision requires manual review.",
      },
    };
  }
}
```
You would then register the annotator (say, as `risk`), connect `draft.draft` to the annotator's `draft` input, and route `risk.reviewPacket` into your approval component instead of the plain-text draft.
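It is worth giving the review packet an explicit type so the approval component and any downstream UI agree on its shape. A minimal sketch, assuming the field names from the example above (the `RiskLevel` values are my assumption):

```typescript
// Hypothetical shape for the annotator's output; field names
// match the RiskAnnotator example, the level values are assumed.
type RiskLevel = "low" | "medium" | "high";

interface ReviewPacket {
  text: string;
  riskLevel: RiskLevel;
  reason: string;
}

function annotate(draft: string): ReviewPacket {
  // A real annotator would run policy checks here; this hard-codes
  // the financial-decision rule from the example.
  return {
    text: draft,
    riskLevel: "medium",
    reason: "Customer-facing financial decision requires manual review.",
  };
}
```

With a typed packet, the approval UI can render the risk level and reason next to the draft instead of showing reviewers an unexplained blob of text.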
Testing It
Run the script with `npx tsx your-file.ts`. When prompted, enter `y` to approve or `n` to reject. If you approve, the final output should match the drafted text; if you reject, you should see the rejection message instead.
Test both paths before shipping anything. You want to confirm that blocked outputs never reach downstream systems like email senders, CRM updates, or case management tools.
A good production check is to log every approval decision with timestamp, reviewer ID, and request ID. That gives you an audit trail when compliance asks who approved what and why.
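A sketch of that audit record, with illustrative field names (`AuditEntry` and `recordDecision` are not part of any library):

```typescript
// Hypothetical audit record: who approved what, for which
// request, and when. Inject the clock so tests are deterministic.
interface AuditEntry {
  requestId: string;
  reviewerId: string;
  approved: boolean;
  decidedAt: string; // ISO 8601 timestamp
}

function recordDecision(
  requestId: string,
  reviewerId: string,
  approved: boolean,
  now: Date = new Date()
): AuditEntry {
  return { requestId, reviewerId, approved, decidedAt: now.toISOString() };
}
```

In practice you would append each entry to durable storage (a database table or an append-only log) rather than just logging it to stdout.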
Next Steps
- Replace terminal approval with a web-based review queue using your internal admin app
- Add policy checks before human review so only risky cases get escalated
- Store reviewer decisions in your database for auditability and retraining data
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.