LangGraph Tutorial (TypeScript): Implementing Guardrails for Intermediate Developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add guardrails to a LangGraph TypeScript agent so it can reject unsafe requests, validate model output, and route bad cases into a safe fallback path. You need this when your graph is allowed to touch user-generated content, internal tools, or regulated workflows where “let the model decide” is not acceptable.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • @langchain/langgraph
  • @langchain/openai
  • @langchain/core
  • An OpenAI API key set as OPENAI_API_KEY
  • A project with "type": "module" in package.json

Install the packages:

npm install @langchain/langgraph @langchain/openai @langchain/core
npm install -D typescript tsx @types/node
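
If you don't already have a TypeScript config, a minimal tsconfig.json along these lines works with tsx and ES modules (the exact options are only a suggestion, so adapt them to your project):

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}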

Step-by-Step

  1. Start with a small graph state that carries the user input, the model output, and any guardrail decisions. Keep the state explicit; that makes routing and testing much easier than hiding everything inside prompt text.
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const GraphState = Annotation.Root({
  input: Annotation<string>(),
  blocked: Annotation<boolean>(),
  reason: Annotation<string>(),
  response: Annotation<string>(),
});

type GraphStateType = typeof GraphState.State;

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
  2. Add a pre-model guardrail that blocks obviously unsafe requests before they reach the LLM. In production you would expand this list or replace it with a policy service, but the pattern stays the same: inspect state, set flags, and route accordingly.
async function preGuardrail(state: GraphStateType) {
  const blockedPatterns = [
    /password/i,
    /credit card/i,
    /ssn/i,
    /social security/i,
  ];

  const isBlocked = blockedPatterns.some((pattern) => pattern.test(state.input));

  return {
    blocked: isBlocked,
    reason: isBlocked ? "Request matched a restricted data pattern." : "",
  };
}

async function safeFallback() {
  return {
    response:
      "I can't help with that request. Please remove sensitive data or rephrase it.",
  };
}
  3. Add your main LLM node and a post-model validator. This catches bad output even when the prompt is fine, which matters because guardrails need to protect both sides of the exchange.
async function generateResponse(state: GraphStateType) {
  const result = await model.invoke([
    {
      role: "system",
      content:
        "You are a helpful assistant. Never request sensitive personal data.",
    },
    { role: "user", content: state.input },
  ]);

  return { response: result.content.toString() };
}

async function postGuardrail(state: GraphStateType) {
  const forbiddenOutput = [/password/i, /credit card/i, /ssn/i];
  const violatesPolicy = forbiddenOutput.some((pattern) =>
    pattern.test(state.response)
  );

  return {
    blocked: violatesPolicy,
    reason: violatesPolicy
      ? "Model output contained restricted data handling language."
      : state.reason,
  };
}
  4. Wire the graph with conditional routing so blocked requests never hit the model, and unsafe outputs get replaced with a fallback response. This is the core pattern you want in real systems: decision nodes first, generation second, validation last.
const workflow = new StateGraph(GraphState)
  .addNode("pre_guardrail", preGuardrail)
  .addNode("generate", generateResponse)
  .addNode("post_guardrail", postGuardrail)
  .addNode("fallback", safeFallback)
  .addEdge(START, "pre_guardrail")
  .addConditionalEdges("pre_guardrail", (state) =>
    state.blocked ? "fallback" : "generate"
  )
  .addEdge("generate", "post_guardrail")
  .addConditionalEdges("post_guardrail", (state) =>
    state.blocked ? "fallback" : END
  )
  .addEdge("fallback", END);

const app = workflow.compile();
  5. Run the graph against both allowed and blocked inputs so you can see each branch behave correctly. Keep these tests in code while you’re building; they’re cheap regression checks for policy changes later.
async function main() {
  const allowed = await app.invoke({
    input: "Summarize why input validation matters in web apps.",
    blocked: false,
    reason: "",
    response: "",
  });

  const denied = await app.invoke({
    input: "Here is my credit card number, help me store it safely.",
    blocked: false,
    reason: "",
    response: "",
  });

  console.log("ALLOWED:", allowed);
  console.log("DENIED:", denied);
}

main().catch(console.error);

Testing It

Run the file with tsx:

npx tsx src/index.ts

For the allowed case, you should see a normal model response in the response field, and blocked should still be false after validation. For the denied case, the graph should skip generation entirely and return the fallback message.

If you want to verify routing more aggressively, log inside each node or assert on final state in a small test file. The important thing is that your policy decision happens before generation for input safety, and after generation for output safety.
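
Here is a minimal version of that check, assuming you add export to const app = workflow.compile(); and move the main() call into a separate entry file (or guard it) so importing the module doesn't run it. The allowed-case assertions depend on live model output, so treat them as a smoke check rather than a strict test:

// src/guardrails-check.ts — a sketch; the file name and import path are suggestions.
import assert from "node:assert/strict";
import { app } from "./index.js";

async function checkRouting() {
  // Denied branch: the pre-guardrail should block before generation runs.
  const denied = await app.invoke({
    input: "Here is my credit card number, help me store it safely.",
    blocked: false,
    reason: "",
    response: "",
  });
  assert.equal(denied.blocked, true);
  assert.match(denied.response, /remove sensitive data/i);

  // Allowed branch: both guardrails should pass and a response should exist.
  const allowed = await app.invoke({
    input: "Summarize why input validation matters in web apps.",
    blocked: false,
    reason: "",
    response: "",
  });
  assert.equal(allowed.blocked, false);
  assert.ok(allowed.response.length > 0);

  console.log("Guardrail routing checks passed.");
}

checkRouting().catch((err) => {
  console.error(err);
  process.exit(1);
});

Run it the same way: npx tsx src/guardrails-check.ts.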

Next Steps

  • Replace regex checks with structured policy rules backed by your own moderation service or DLP API.
  • Add an escalation branch that sends blocked requests to human review instead of only returning a fallback.
  • Extend state with trace metadata so you can audit which rule fired and why for every run; a minimal sketch of that idea follows below.
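
As a sketch of that last point, one way to carry audit metadata is to add a trace channel to the state with a reducer that appends entries, then have each guardrail node record which rule it applied. The field and rule names here are illustrative, not part of the graph built earlier:

// Illustrative only: the same state as before plus an appended audit trail.
const AuditedState = Annotation.Root({
  input: Annotation<string>(),
  blocked: Annotation<boolean>(),
  reason: Annotation<string>(),
  response: Annotation<string>(),
  // Each node can return a partial list; the reducer appends it to the trail.
  trace: Annotation<{ node: string; rule: string; at: string }[]>({
    reducer: (existing, incoming) => existing.concat(incoming),
    default: () => [],
  }),
});

async function preGuardrailWithTrace(state: typeof AuditedState.State) {
  const isBlocked = /password|credit card|ssn/i.test(state.input);
  return {
    blocked: isBlocked,
    reason: isBlocked ? "Request matched a restricted data pattern." : "",
    trace: [
      {
        node: "pre_guardrail",
        rule: isBlocked ? "restricted-data-pattern" : "none",
        at: new Date().toISOString(),
      },
    ],
  };
}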

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

