CrewAI Tutorial (TypeScript): implementing guardrails for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add guardrails to a CrewAI workflow in TypeScript so your agents can reject bad inputs, validate outputs, and fail closed when something drifts. You need this when the agent is handling regulated or high-risk work like claims triage, KYC checks, policy summaries, or internal ops tasks where “mostly correct” is not acceptable.

What You'll Need

  • Node.js 20+
  • A TypeScript project with ts-node or a build step
  • crewai installed in your project
  • An OpenAI API key set as OPENAI_API_KEY
  • Optional: zod for stricter schema validation
  • A .env file or equivalent secret management

Step-by-Step

  1. Start with a minimal CrewAI setup and define the data shape you want to protect. The guardrail pattern works best when you know exactly what “valid” means before the agent runs.
import "dotenv/config";
import { Agent, Task, Crew } from "crewai";

type RiskAssessment = {
  customerId: string;
  riskLevel: "low" | "medium" | "high";
  reason: string;
};

const analyst = new Agent({
  name: "Risk Analyst",
  role: "Insurance risk analyst",
  goal: "Assess customer risk from structured case notes",
  backstory: "You produce strict JSON only.",
  verbose: true,
});

const task = new Task({
  // {customerId} and {notes} interpolate the values passed to crew.kickoff().
  description:
    "Assess the case notes for customer {customerId} and return customerId, riskLevel, and reason as strict JSON. Case notes: {notes}",
  expectedOutput: "A valid RiskAssessment object as JSON only",
  agent: analyst,
});
  2. Add a pre-flight input guardrail before the crew runs. This is where you block empty payloads, missing identifiers, or text that is obviously unsafe for the task.
function validateInput(input: unknown): asserts input is { customerId: string; notes: string } {
  if (typeof input !== "object" || input === null) throw new Error("Invalid input");
  const record = input as Record<string, unknown>;
  if (typeof record.customerId !== "string" || !record.customerId.trim()) {
    throw new Error("customerId is required");
  }
  if (typeof record.notes !== "string" || record.notes.length < 20) {
    throw new Error("notes must be at least 20 characters");
  }
}

const caseInput = {
  customerId: "CUST-1042",
  notes: "Customer has two late payments in the last six months and one bounced debit order.",
};

validateInput(caseInput);
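Throwing on the first violation is fine for a script, but API callers usually want every problem reported at once. A variant of the same guardrail that collects all violations might look like this (the rules mirror validateInput above; collectInputErrors is an illustrative name, not a CrewAI API):

```typescript
// Sketch: an input guardrail that reports every violation instead of
// throwing on the first one. Same rules as validateInput above.
function collectInputErrors(input: unknown): string[] {
  if (typeof input !== "object" || input === null) {
    return ["input must be an object"];
  }
  const record = input as Record<string, unknown>;
  const errors: string[] = [];
  if (typeof record.customerId !== "string" || !record.customerId.trim()) {
    errors.push("customerId is required");
  }
  if (typeof record.notes !== "string" || record.notes.length < 20) {
    errors.push("notes must be at least 20 characters");
  }
  return errors;
}
```

Returning the array in a 400 response lets callers fix every field in one round trip instead of resubmitting once per error.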
  3. Force the agent output through a strict parser. For production systems, do not trust natural language responses; parse them and reject anything that does not match your contract.
import { z } from "zod";

const RiskAssessmentSchema = z.object({
  customerId: z.string(),
  riskLevel: z.enum(["low", "medium", "high"]),
  reason: z.string().min(10),
});

function parseAgentOutput(rawText: string): RiskAssessment {
  // Extract the outermost JSON object in case the model wrapped it in prose.
  const jsonStart = rawText.indexOf("{");
  const jsonEnd = rawText.lastIndexOf("}");
  if (jsonStart === -1 || jsonEnd === -1) throw new Error("No JSON found in output");

  // Both JSON.parse and the zod schema throw on violations, so this fails closed.
  const parsed = JSON.parse(rawText.slice(jsonStart, jsonEnd + 1));
  return RiskAssessmentSchema.parse(parsed);
}
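The outermost-brace scan quietly handles the most common messy outputs, such as JSON wrapped in markdown fences or surrounded by chatty prose. A self-contained sketch of the same extraction (without the zod step; extractJson is a hypothetical helper, not a CrewAI API) makes the behavior easy to verify:

```typescript
// Sketch: the same outermost-brace extraction used in parseAgentOutput,
// isolated so its behavior on messy model output can be tested directly.
function extractJson(rawText: string): unknown {
  const start = rawText.indexOf("{");
  const end = rawText.lastIndexOf("}");
  if (start === -1 || end === -1 || end < start) {
    throw new Error("No JSON found in output");
  }
  // JSON.parse throws on anything malformed between the braces, so the
  // guardrail still fails closed when the model mixes prose into the object.
  return JSON.parse(rawText.slice(start, end + 1));
}
```

A fenced response and a "Here is the result: {...}" style answer both reduce to the same object; genuinely malformed output still throws.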
  4. Wrap the crew execution with a retry-and-reject policy. If the first response fails validation, ask once more with a stricter formatting reminder; if it still fails, stop the workflow and fail closed.
async function runWithGuardrails(input: { customerId: string; notes: string }) {
  const crew = new Crew({
    agents: [analyst],
    tasks: [task],
    verbose: true,
    memory: false,
  });

  const maxAttempts = 2;
  let lastError: unknown;

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // On the retry, tighten the instructions by appending a reminder to the notes.
    const reminder =
      attempt > 1 ? "\n\nReturn ONLY the JSON object, with no surrounding prose." : "";

    const result = await crew.kickoff({
      inputs: {
        customerId: input.customerId,
        notes: input.notes + reminder,
      },
    });

    try {
      // parseAgentOutput throws on anything that violates the contract.
      return parseAgentOutput(String(result));
    } catch (error) {
      lastError = error;
    }
  }

  throw new Error(
    `Agent output failed validation after ${maxAttempts} attempts: ${String(lastError)}`,
  );
}
  5. Put it together in an executable entry point and fail closed on invalid output. This pattern is what you want in banking and insurance systems because downstream services should never receive malformed agent data.
async function main() {
  try {
    const assessment = await runWithGuardrails(caseInput);

    console.log("Validated assessment:");
    console.log(JSON.stringify(assessment, null, 2));
    
    if (assessment.riskLevel === "high") {
      console.log("Escalate to manual review");
    }
  } catch (error) {
    console.error("Guardrail blocked execution:", error);
    process.exitCode = 1;
  }
}

main();

Testing It

Run the script with a valid payload first and confirm that you get a parsed JSON object back, not free-form text. Then break one rule at a time by shortening notes, removing customerId, or forcing the model to return non-JSON output.

You want three outcomes during testing:

  • invalid input fails before CrewAI starts
  • invalid output fails during parsing
  • valid output reaches your business logic unchanged

If you are using this in a real service, add unit tests around validateInput() and parseAgentOutput(). Those two functions are your actual guardrails; CrewAI is just the execution layer.

Next Steps

  • Add schema versioning so older agents cannot write into newer contracts
  • Route failed validations into an audit queue for human review
  • Extend this pattern with tool-level guardrails for database writes and external API calls
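The first bullet can start as a required schemaVersion field checked before the zod parse. The field name and version number below are an illustrative convention, not a CrewAI feature:

```typescript
// Sketch: reject agent output whose schemaVersion does not match the
// contract this service was built against.
const SUPPORTED_SCHEMA_VERSION = 2;

function assertSchemaVersion(payload: Record<string, unknown>): void {
  if (payload.schemaVersion !== SUPPORTED_SCHEMA_VERSION) {
    throw new Error(
      `Unsupported schemaVersion ${String(payload.schemaVersion)}; ` +
        `this service accepts only version ${SUPPORTED_SCHEMA_VERSION}`,
    );
  }
}
```

Call it right after extracting the JSON and before the schema parse, so an older agent writing version 1 output is rejected with an auditable error instead of being silently coerced into the new contract.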

By Cyprian Aarons, AI Consultant at Topiax.
