LangChain Tutorial (TypeScript): adding human-in-the-loop for beginners
This tutorial shows you how to add a human approval step into a LangChain TypeScript workflow before an agent takes a risky action. You need this when the model can draft, decide, or recommend, but a person must approve the final output before anything is sent to a customer, stored in a system, or used in a business process.
What You'll Need
- Node.js 18+
- A TypeScript project with ts-node or tsx
- These packages:
  - langchain
  - @langchain/openai
  - zod
  - dotenv
- An OpenAI API key in .env (example below)
- Basic familiarity with LangChain chat models and prompts
- A terminal where you can run TypeScript files directly
Install the dependencies:
npm install langchain @langchain/core @langchain/openai zod dotenv
npm install -D typescript tsx @types/node
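Then create a .env file in the project root. By default, the ChatOpenAI client reads your key from the OPENAI_API_KEY environment variable, which dotenv loads from this file. The value below is a placeholder:
OPENAI_API_KEY=sk-your-key-here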
Step-by-Step
1. Create a simple chain that drafts an action proposal.
The idea is to let the model produce a structured recommendation first. We will not execute anything yet; we only generate something a human can review.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
const model = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
});
const DraftSchema = z.object({
summary: z.string(),
riskLevel: z.enum(["low", "medium", "high"]),
recommendedAction: z.string(),
});
const prompt = ChatPromptTemplate.fromMessages([
["system", "You are an insurance operations assistant."],
["human", "Review this claim note and propose the next action: {note}"],
]);
const chain = prompt.pipe(model.withStructuredOutput(DraftSchema));
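If you want to sanity-check this step on its own before wiring up the approval gate, you can invoke the chain directly (inside an async function) and inspect the structured draft. The claim note here is just sample input:

// Somewhere inside an async function, e.g. a temporary main():
const draft = await chain.invoke({
  note: "Customer reports a cracked windshield. Policy is active.",
});
console.log(draft);
// Example shape: { summary: "...", riskLevel: "low", recommendedAction: "..." }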
2. Add a human-in-the-loop approval function.
This is the core pattern. The model prepares a draft, then your app pauses and asks for approval before continuing. In production, this approval step might be a UI button, Slack message, or ticket workflow instead of terminal input.
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

async function askForApproval(draft: z.infer<typeof DraftSchema>) {
  console.log("\nDraft proposal:");
  console.log(JSON.stringify(draft, null, 2));

  const rl = readline.createInterface({ input, output });
  const answer = await rl.question("\nApprove this action? (yes/no): ");
  rl.close();

  return answer.trim().toLowerCase() === "yes";
}
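Because the rest of the code only needs a yes/no answer, it can help to think of the approval step as a swappable function. The sketch below is not part of the tutorial code; the Slack variant is hypothetical and would have to be implemented against your own tooling:

// Any async function that maps a draft to a boolean can act as the gate.
type ApprovalGate = (draft: z.infer<typeof DraftSchema>) => Promise<boolean>;

// Terminal-based gate: the askForApproval function defined above.
const terminalGate: ApprovalGate = askForApproval;

// Hypothetical Slack-based gate: post the draft to a channel and await a decision.
// requestSlackApproval is a placeholder for your own integration, not a real API.
// const slackGate: ApprovalGate = (draft) => requestSlackApproval(draft);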
3. Wrap the chain in an approval gate.
This function runs the model, shows the result to a person, and only proceeds if approved. If rejected, it returns early and nothing downstream executes.
async function runWithHumanApproval(note: string) {
  const draft = await chain.invoke({ note });
  const approved = await askForApproval(draft);

  if (!approved) {
    return {
      status: "rejected" as const,
      message: "Human rejected the proposed action.",
      draft,
    };
  }

  return {
    status: "approved" as const,
    message: `Proceed with: ${draft.recommendedAction}`,
    draft,
  };
}
4. Add a safe downstream action after approval.
Here we simulate the risky step with a log statement. In your real app, this could create a ticket, update CRM data, send an email, or trigger an internal workflow.
async function executeApprovedAction(
  result: Awaited<ReturnType<typeof runWithHumanApproval>>
) {
  if (result.status !== "approved") {
    console.log(result.message);
    return;
  }

  console.log("\nExecuting approved action...");
  console.log(`Action: ${result.draft.recommendedAction}`);
}

async function main() {
  const result = await runWithHumanApproval(
    "Customer reports water damage after heavy rain. Policy is active."
  );
  await executeApprovedAction(result);
}

main().catch(console.error);
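In a real application, the only code that changes is the body of the execution function: swap the log statement for your actual side effect, still guarded by the same status check. A minimal sketch, where createTicket stands in for a hypothetical ticketing or CRM integration:

// Placeholder for a real ticketing/CRM call; in this sketch it only logs.
async function createTicket(ticket: { title: string; action: string; risk: string }) {
  console.log("Creating ticket:", ticket);
}

async function executeApprovedActionWithTicket(
  result: Awaited<ReturnType<typeof runWithHumanApproval>>
) {
  // The guard stays identical: rejected drafts never trigger the side effect.
  if (result.status !== "approved") {
    console.log(result.message);
    return;
  }

  await createTicket({
    title: result.draft.summary,
    action: result.draft.recommendedAction,
    risk: result.draft.riskLevel,
  });
}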
5. Run it end-to-end and keep the boundary explicit.
The important part is that the LLM never directly performs the sensitive operation. Your code owns the decision boundary, which makes auditing and policy enforcement much easier.
npx tsx index.ts
If you want a single-file version for copy-paste testing, combine everything above into index.ts:
import "dotenv/config";
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
const model = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
});
const DraftSchema = z.object({
summary: z.string(),
riskLevel: z.enum(["low", "medium", "high"]),
recommendedAction: z.string(),
});
const prompt = ChatPromptTemplate.fromMessages([
["system", "You are an insurance operations assistant."],
["human", "Review this claim note and propose the next action: {note}"],
]);
const chain = prompt.pipe(model.withStructuredOutput(DraftSchema));
async function askForApproval(draft: z.infer<typeof DraftSchema>) {
console.log("\nDraft proposal:");
console.log(JSON.stringify(draft, null, 2));
const rl = readline.createInterface({ input, output });
const answer = await rl.question("\nApprove this action? (yes/no): ");
rl.close();
return answer.trim().toLowerCase() === "yes";
}
async function runWithHumanApproval(note: string) {
const draft = await chain.invoke({ note });
const approved = await askForApproval(draft);
if (!approved) {
return {
status: "rejected" as const,
message: "Human rejected the proposed action.",
draft,
};
}
return {
status: "approved" as const,
message: `Proceed with: ${draft.recommendedAction}`,
draft,
};
}
async function executeApprovedAction(result: Awaited<ReturnType<typeof runWithHumanApproval>>) {
if (result.status !== "approved") {
console.log(result.message);
return;
}
console.log("\nExecuting approved action...");
console.log(`Action: ${result.draft.recommendedAction}`);
}
async function main() {
const result = await runWithHumanApproval(
"Customer reports water damage after heavy rain. Policy is active."
);
await executeApprovedAction(result);
}
main().catch(console.error);
Testing It
Run the file and confirm that it prints a structured draft before asking for approval. Answer "no" and verify that nothing downstream executes beyond the rejection path.
Then run it again and answer "yes". You should see the simulated execution message only after approval.
If you want to test failure handling, remove your API key or use an invalid one and confirm that the app fails before any approval prompt appears. That tells you your validation boundary is in the right place.
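If you want that failure handled explicitly rather than just surfaced by main().catch, you can wrap the gated run in a try/catch so an API error is reported as its own status and never reaches a human. A minimal sketch of that variation, reusing the chain and askForApproval defined earlier:

async function runWithHumanApprovalSafe(note: string) {
  try {
    // Fails here, before any human is involved, if the API key is missing or invalid.
    const draft = await chain.invoke({ note });

    const approved = await askForApproval(draft);
    if (!approved) {
      return {
        status: "rejected" as const,
        message: "Human rejected the proposed action.",
        draft,
      };
    }

    return {
      status: "approved" as const,
      message: `Proceed with: ${draft.recommendedAction}`,
      draft,
    };
  } catch (err) {
    return {
      status: "error" as const,
      message: `Draft or approval step failed: ${String(err)}`,
    };
  }
}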
Next Steps
- Replace terminal input with a real approval UI in React or Next.js.
- Add audit logging for every draft, approver identity, timestamp, and final decision (see the sketch below).
- Move from manual prompts to LangGraph when you need multi-step workflows with explicit state transitions.
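As a starting point for the audit-logging idea, every human decision can be captured as a structured record before anything executes. This is a sketch only; the field names and the appendAuditLog helper are assumptions, not an existing API:

// A structured record of each human decision, written before any side effect runs.
type AuditEntry = {
  timestamp: string;
  approver: string; // identity of the person who answered the prompt
  decision: "approved" | "rejected";
  draft: z.infer<typeof DraftSchema>;
};

// Hypothetical sink: swap for a database insert or append-only log in production.
async function appendAuditLog(entry: AuditEntry) {
  console.log("AUDIT", JSON.stringify(entry));
}

// Example usage right after askForApproval resolves:
// await appendAuditLog({
//   timestamp: new Date().toISOString(),
//   approver: "ops-user@example.com",
//   decision: approved ? "approved" : "rejected",
//   draft,
// });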
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit