LangChain Tutorial (TypeScript): adding human-in-the-loop for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add a human approval step into a LangChain TypeScript workflow before the model can take a sensitive action. You need this when an agent is allowed to draft, decide, or call tools, but a person must still approve high-risk outputs like payments, policy changes, or customer-facing responses.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • An OpenAI API key in OPENAI_API_KEY
  • Packages:
    • langchain
    • @langchain/openai
    • zod
    • tsx for running TypeScript directly during development
  • A terminal where you can pause for input
  • Basic familiarity with LangChain Runnable patterns and structured outputs

Step-by-Step

  1. Start with a chain that produces a structured action, not free-form text.
    Human-in-the-loop works best when the model returns something your app can validate before execution.
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const ActionSchema = z.object({
  summary: z.string(),
  riskLevel: z.enum(["low", "medium", "high"]),
  requiresApproval: z.boolean(),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// withStructuredOutput parses the response against the Zod schema,
// so downstream code can branch on typed fields instead of raw text.
const structuredModel = llm.withStructuredOutput(ActionSchema);

const result = await structuredModel.invoke(
  "Draft a response to a customer asking for a refund on an expired subscription."
);

console.log(result);
  2. Add a human approval gate around the risky branch.
    In production, this gate can be a CLI prompt, an internal web UI, Slack approval, or a ticketing system callback.
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

async function requestApproval(summary: string) {
  const rl = readline.createInterface({ input, output });
  const answer = await rl.question(`Approve this action?\n${summary}\nType yes/no: `);
  rl.close();
  return answer.trim().toLowerCase() === "yes";
}

const approved = await requestApproval(result.summary);

if (!approved) {
  console.log("Action rejected by human.");
  process.exit(0);
}
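The CLI prompt is just one implementation of the gate. A useful pattern is to define the gate as an interface that resolves a boolean, so you can swap the CLI for a Slack callback or a ticketing webhook without touching the workflow. The sketch below is illustrative: `ApprovalGate` and `makePendingGate` are hypothetical names, not LangChain APIs, and the "webhook" is simulated with an in-memory resolver.

```typescript
// Sketch: decouple "how approval happens" from "whether to proceed".
type ApprovalGate = (summary: string) => Promise<boolean>;

// A queue-backed gate: the returned promise stays pending until an
// external approver (e.g. a Slack interaction webhook) resolves it.
function makePendingGate() {
  const pending = new Map<string, (ok: boolean) => void>();
  const gate: ApprovalGate = (summary) =>
    new Promise((resolve) => pending.set(summary, resolve));
  const resolveApproval = (summary: string, ok: boolean) =>
    pending.get(summary)?.(ok);
  return { gate, resolveApproval };
}

const { gate, resolveApproval } = makePendingGate();
const verdict = gate("Refund $500");   // pending until the "webhook" fires
resolveApproval("Refund $500", true);  // simulated external approval
console.log(await verdict);            // true
```

The CLI `requestApproval` function above already satisfies this `ApprovalGate` shape, so the calling code stays identical whichever channel you plug in.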
  3. Wrap the model output and approval logic into one executable workflow.
    This is the part you will actually reuse in an agent: generate candidate action, inspect it, then continue only if approved.
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

const ActionSchema = z.object({
  summary: z.string(),
  riskLevel: z.enum(["low", "medium", "high"]),
  requiresApproval: z.boolean(),
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const structuredModel = llm.withStructuredOutput(ActionSchema);

async function requestApproval(summary: string) {
  const rl = readline.createInterface({ input, output });
  const answer = await rl.question(`Approve this action?\n${summary}\nType yes/no: `);
  rl.close();
  return answer.trim().toLowerCase() === "yes";
}

async function main() {
  const candidate = await structuredModel.invoke(
    "Draft an internal note proposing a $500 goodwill refund for an angry customer."
  );

  console.log("Candidate:", candidate);

  if (candidate.requiresApproval || candidate.riskLevel === "high") {
    const approved = await requestApproval(candidate.summary);
    if (!approved) {
      console.log("Rejected.");
      return;
    }
  }

  console.log("Approved. Continue with downstream execution here.");
}

await main();
  4. Make the approval decision explicit in your prompt so the model knows what needs review.
    Don’t rely on vague “be careful” instructions; define the threshold in terms of business risk and downstream side effects.
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const DecisionSchema = z.object({
  summary: z.string(),
  riskLevel: z.enum(["low", "medium", "high"]),
  requiresApproval: z.boolean(),
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const decisionChain = llm.withStructuredOutput(DecisionSchema);

const prompt = `
You are preparing an action for a banking operations workflow.
Mark requiresApproval=true if the action affects money movement,
customer account status, legal wording, or external communication.
`;

const decision = await decisionChain.invoke(prompt + "\nTask: propose next action for reversing an overdraft fee.");
console.log(decision);
  5. Add an audit trail before and after approval.
    In real systems, you want to log who approved what, when they approved it, and what exact payload was executed.
type AuditEvent =
  | { type: "candidate_created"; payload: unknown; timestamp: string }
  | { type: "approval_requested"; payload: unknown; timestamp: string }
  | { type: "approved"; payload: unknown; timestamp: string }
  | { type: "rejected"; payload: unknown; timestamp: string };

const auditLog: AuditEvent[] = [];

function record(event: AuditEvent) {
  auditLog.push(event);
}

record({ type: "candidate_created", payload: result, timestamp: new Date().toISOString() });
record({ type: "approval_requested", payload: result.summary, timestamp: new Date().toISOString() });

if (approved) {
  record({ type: "approved", payload: result.summary, timestamp: new Date().toISOString() });
} else {
  record({ type: "rejected", payload: result.summary, timestamp: new Date().toISOString() });
}

console.log(JSON.stringify(auditLog, null, 2));

Testing It

Run the script with OPENAI_API_KEY set and confirm that the model returns a structured object instead of raw text. Then trigger both paths manually by approving once and rejecting once at the prompt.

Check that high-risk outputs always stop at the human gate before any downstream execution happens. Also verify your audit log records the candidate content and the final decision in order.
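You can also exercise both paths without an API key or a terminal by testing the gate logic in isolation. The sketch below extracts the gating condition from main() into small functions and stubs the approver; the names shouldGate and runGate are illustrative helpers, not part of LangChain.

```typescript
// Sketch: unit-test the gate with the model and the CLI stubbed out.
type Candidate = {
  summary: string;
  riskLevel: "low" | "medium" | "high";
  requiresApproval: boolean;
};

// Mirrors the condition used in main() above.
function shouldGate(c: Candidate): boolean {
  return c.requiresApproval || c.riskLevel === "high";
}

async function runGate(
  c: Candidate,
  approve: (summary: string) => Promise<boolean> // stubbed human gate
): Promise<"executed" | "rejected"> {
  if (shouldGate(c)) {
    const ok = await approve(c.summary);
    if (!ok) return "rejected";
  }
  return "executed";
}

// Exercise both paths deterministically:
const risky: Candidate = { summary: "refund", riskLevel: "high", requiresApproval: true };
console.log(await runGate(risky, async () => false)); // rejected
console.log(await runGate(risky, async () => true));  // executed
```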

If you wire this into a real tool-calling agent later, keep the same pattern:

  • model proposes
  • app validates
  • human approves
  • system executes
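The four stages above can be sketched as one generic pipeline. This is an illustrative abstraction, not a LangChain API: `Pipeline` and `run` are hypothetical names, and each stage is pluggable so the same skeleton works for a CLI gate today and a Slack gate later.

```typescript
// Sketch: model proposes -> app validates -> human approves -> system executes.
type Pipeline<A> = {
  propose: () => Promise<A>;            // model produces a candidate action
  validate: (a: A) => boolean;          // app-side schema/business checks
  approve: (a: A) => Promise<boolean>;  // human gate
  execute: (a: A) => Promise<void>;     // side effects happen only here
};

async function run<A>(p: Pipeline<A>): Promise<"invalid" | "rejected" | "done"> {
  const candidate = await p.propose();
  if (!p.validate(candidate)) return "invalid";
  if (!(await p.approve(candidate))) return "rejected";
  await p.execute(candidate);
  return "done";
}

// Usage with stubbed stages (no API key needed):
console.log(
  await run({
    propose: async () => ({ amount: 500 }),
    validate: (a) => a.amount <= 1000,
    approve: async () => true, // stand-in for the human gate
    execute: async (a) => console.log("refunding", a.amount),
  })
); // prints "refunding 500" then "done"
```

Keeping `execute` as the only stage with side effects is the design point: nothing irreversible can happen before validation and approval have both passed.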

Next Steps

  • Replace the CLI approval with Slack or Microsoft Teams interactive approvals
  • Add LangGraph state management so approval becomes part of your agent state machine
  • Persist audit events to Postgres or DynamoDB for compliance review

By Cyprian Aarons, AI Consultant at Topiax.