CrewAI Tutorial (TypeScript): implementing guardrails for beginners
This tutorial shows how to add guardrails to a CrewAI workflow in TypeScript so your agents reject bad inputs, unsafe outputs, and malformed data before they reach downstream systems. You need this when you’re building agentic apps for regulated environments, where “close enough” is not good enough.
What You'll Need
- Node.js 18+
- A TypeScript project with ts-node or a build step via tsc
- CrewAI for TypeScript installed in your project
- An LLM API key, such as OPENAI_API_KEY
- Basic familiarity with Agent, Task, Crew, and Process
- A terminal and a code editor
Install the package set first:
npm install @crewai/crewai dotenv zod
npm install -D typescript ts-node @types/node
Step-by-Step
1) Set up your environment
Keep secrets in .env, then load them at runtime. For guardrails, you want deterministic checks before the model response is accepted.
import "dotenv/config";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY");
}

export const config = {
  apiKey: process.env.OPENAI_API_KEY,
};
2) Define a strict output shape
Use Zod to validate the agent output. This is the simplest guardrail pattern: if the model returns something outside the schema, reject it and retry or fail fast.
import { z } from "zod";
export const ClaimSummarySchema = z.object({
  claimId: z.string().min(1),
  decision: z.enum(["approve", "reject", "review"]),
  reason: z.string().min(20),
});
export type ClaimSummary = z.infer<typeof ClaimSummarySchema>;
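To see exactly what this boundary accepts and rejects, here is a dependency-free sketch of the same rules as a plain TypeScript type guard. It is for illustration only; isClaimSummary is not a CrewAI or Zod API, just a hand-rolled equivalent of the schema above.

```typescript
// Hand-rolled equivalent of ClaimSummarySchema, with no zod dependency.
// Useful for reasoning about the guardrail boundary in isolation.
type Decision = "approve" | "reject" | "review";

interface ClaimSummary {
  claimId: string;
  decision: Decision;
  reason: string;
}

function isClaimSummary(value: unknown): value is ClaimSummary {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.claimId === "string" && v.claimId.length >= 1 &&
    (v.decision === "approve" || v.decision === "reject" || v.decision === "review") &&
    typeof v.reason === "string" && v.reason.length >= 20
  );
}

// A well-formed result passes; a bad enum value or short reason fails.
console.log(isClaimSummary({
  claimId: "CLM-123",
  decision: "review",
  reason: "Water damage requires an adjuster inspection.",
})); // true
console.log(isClaimSummary({ claimId: "CLM-123", decision: "maybe", reason: "too short" })); // false
```

The Zod version buys you the same checks with far less code, plus error messages you can log when validation fails.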
3) Build an agent and task with a guardrail function
In CrewAI TypeScript, you attach a guardrail to the task. The guardrail inspects the raw result and either returns a validated value or throws an error.
import { Agent, Task } from "@crewai/crewai";
import { ClaimSummarySchema } from "./schema.js";
const claimsAgent = new Agent({
  role: "Claims Analyst",
  goal: "Summarize insurance claims decisions safely",
  backstory: "You validate claim outcomes against company policy.",
});

export const claimTask = new Task({
  description:
    "Review the claim notes and return JSON with claimId, decision, and reason.",
  expectedOutput:
    'A JSON object like {"claimId":"CLM-123","decision":"review","reason":"..."}',
  agent: claimsAgent,
  guardrail: async (result: string) => {
    // Both JSON.parse and schema.parse throw on bad output,
    // which is exactly the rejection signal we want here.
    const parsed = JSON.parse(result);
    return ClaimSummarySchema.parse(parsed);
  },
});
The key detail here is that the guardrail returns structured data, not just a boolean. That gives you one place to enforce schema rules before anything continues.
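One practical hardening step: models often wrap JSON in markdown code fences even when told not to, and a bare JSON.parse will reject those results unnecessarily. A small framework-free helper can strip the fences first (extractJson is an illustrative name, not a CrewAI API):

```typescript
// Strips a leading ```json / ``` fence and a trailing ``` fence, if present,
// before parsing. Everything else passes through JSON.parse unchanged.
function extractJson(raw: string): unknown {
  const cleaned = raw
    .replace(/^\s*```(?:json)?\s*/i, "")
    .replace(/\s*```\s*$/, "")
    .trim();
  return JSON.parse(cleaned);
}

const fenced = '```json\n{"claimId":"CLM-9","decision":"approve","reason":"Covered under policy."}\n```';
console.log(extractJson(fenced)); // parsed object, fences removed
```

Inside the guardrail, you would call extractJson(result) instead of JSON.parse(result) and then apply the schema as before.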
4) Add a retry strategy for failed validations
Guardrails are most useful when they can recover from small mistakes. If parsing fails or the schema is wrong, ask the model again with tighter instructions.
import { Task } from "@crewai/crewai";
import { claimTask } from "./task.js";
export const guardedClaimTask = new Task({
  // Spreading copies the original task's fields, including its
  // agent, expectedOutput, and guardrail.
  ...claimTask,
  maxRetries: 2,
  description:
    `${claimTask.description}\nReturn valid JSON only. No markdown.`,
});
This keeps your workflow resilient without letting malformed output slip through. In production, two retries is usually enough before escalating to a human review queue.
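The validate-and-retry idea generalizes beyond any one SDK. As a sketch that assumes nothing about CrewAI (withRetries, produce, and validate are all illustrative names), the loop looks like this:

```typescript
// Generic validate-and-retry loop. `produce` stands in for one model call;
// `validate` throws on bad output. maxRetries = 2 means up to 3 attempts.
async function withRetries<T>(
  produce: (attempt: number) => Promise<string>,
  validate: (raw: string) => T,
  maxRetries = 2,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return validate(await produce(attempt));
    } catch (err) {
      lastError = err; // keep the most recent failure for escalation
    }
  }
  // All attempts failed: escalate, e.g. push to a human review queue.
  throw new Error(`Validation failed after ${maxRetries + 1} attempts: ${lastError}`);
}
```

A typical usage would pass the attempt number into the prompt so later attempts carry tighter instructions, mirroring what the guardedClaimTask description does above.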
5) Run the crew and consume validated output
Once the task passes its guardrail, you can safely use the result as typed data. The important part is that downstream code never sees raw LLM text.
import { Crew, Process } from "@crewai/crewai";
import { config } from "./config.js";
import { guardedClaimTask } from "./guarded-task.js";
const crew = new Crew({
  agents: [guardedClaimTask.agent],
  tasks: [guardedClaimTask],
  process: Process.sequential,
});

async function main() {
  const result = await crew.kickoff({
    inputs: {
      claim_notes:
        "Customer reports water damage after pipe burst. Photos attached.",
    },
    apiKey: config.apiKey,
  });
  console.log("Validated result:", result);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
If your version of CrewAI expects API keys through environment variables instead of kickoff inputs, keep OPENAI_API_KEY in .env and let the SDK pick it up automatically.
Testing It
Run the script with valid input first and confirm you get back a structured object with claimId, decision, and reason. Then intentionally break the flow by instructing the model to return free-form text or invalid JSON; the guardrail should reject the output before your app uses it.
Next, test schema failures by making decision something outside the allowed enum, like "maybe". You should see validation fail consistently, which tells you your boundary is working. In a real app, wire that failure into logging plus a fallback path such as manual review.
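That logging-plus-fallback path can be sketched in a few lines. Everything here is illustrative (escalateToReview and reviewQueue are not CrewAI APIs); in production the queue would be a database table or message queue.

```typescript
// Records rejected outputs and routes them to manual review.
interface RejectedOutput {
  raw: string;
  error: string;
  timestamp: string;
}

const reviewQueue: RejectedOutput[] = [];

function escalateToReview(raw: string, error: unknown): void {
  const entry: RejectedOutput = {
    raw,
    error: String(error),
    timestamp: new Date().toISOString(),
  };
  console.error("Guardrail rejected output:", entry.error);
  reviewQueue.push(entry); // in production: persist to a DB or queue
}

escalateToReview('{"decision":"maybe"}', new Error("Invalid enum value"));
```

Persisting the raw output alongside the error is what later lets you analyze failure patterns and tighten prompts.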
Next Steps
- Add content filters before task execution for PII, policy violations, and prompt injection
- Chain multiple guardrails: syntax validation, business-rule validation, then human approval
- Persist rejected outputs so you can analyze failure patterns and tighten prompts later
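The chained-guardrail idea can be sketched in plain TypeScript. All names here are illustrative, not CrewAI APIs: each stage receives the previous stage's output and throws on violation, so a failure anywhere stops the chain.

```typescript
// A guardrail is just a function that returns validated output or throws.
type Guardrail<In, Out> = (input: In) => Out;

function chain<A, B, C>(first: Guardrail<A, B>, second: Guardrail<B, C>): Guardrail<A, C> {
  return (input) => second(first(input));
}

type ClaimDraft = { decision: string; [key: string]: unknown };

// Stage 1: syntax validation (JSON.parse throws on malformed input).
const parseJson: Guardrail<string, ClaimDraft> = (raw) => JSON.parse(raw);

// Stage 2: business-rule validation (an illustrative rule).
const checkDecision: Guardrail<ClaimDraft, ClaimDraft> = (obj) => {
  if (!["approve", "reject", "review"].includes(obj.decision)) {
    throw new Error(`Disallowed decision: ${obj.decision}`);
  }
  return obj;
};

const validateClaim = chain(parseJson, checkDecision);
```

A third stage for human approval would work the same way: a function that throws (or suspends) until a reviewer signs off.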
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit