CrewAI Tutorial (TypeScript): adding audit logs for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add audit logs to a CrewAI TypeScript workflow so every agent task, tool call, and final output gets written to a durable log. You need this when you’re building regulated workflows, supportable automation, or anything where you must explain who did what and when.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project already set up
  • crewai installed in your project
  • dotenv for loading environment variables
  • An OpenAI API key in .env
  • A writable log file location on disk
  • Basic familiarity with CrewAI agents, tasks, and crews

Step-by-Step

  1. Start by installing the packages and creating a minimal TypeScript setup that can run CrewAI code. If you already have a project, keep the install step and skip the scaffolding.
npm install crewai dotenv
npm install -D typescript tsx @types/node
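If you are scaffolding from scratch, a tsconfig.json along these lines is enough for tsx to run the examples; the settings are a reasonable starting point, not the only valid ones:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true
  }
}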
  2. Add your OpenAI API key to .env, then create a small audit helper in audit.ts. The helper writes structured JSON lines so you can grep, ship, or ingest them later without parsing free-form text.
// audit.ts
import fs from "node:fs";
import path from "node:path";

const logDir = path.resolve("logs");
const logFile = path.join(logDir, "audit.log");

// Create the log directory once at startup rather than on every call.
fs.mkdirSync(logDir, { recursive: true });

// Append one JSON object per line so the log stays grep- and ingest-friendly.
export function audit(event: string, data: Record<string, unknown>) {
  fs.appendFileSync(
    logFile,
    JSON.stringify({ ts: new Date().toISOString(), event, ...data }) + "\n"
  );
}
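For reference, a call like audit("task_created", { agent: "Compliance Analyst" }) appends a single line shaped like this (timestamp illustrative):

{"ts":"2026-04-21T12:00:00.000Z","event":"task_created","agent":"Compliance Analyst"}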
  3. Define an agent and a task in task.ts, and export both so the crew runner in the next step can import them. Log an explicit audit event as soon as the task is created; keeping the logging outside the LLM call means failures in prompts or tools are still captured.
import "dotenv/config";
import { Agent, Task } from "crewai";
import { audit } from "./audit";

const analyst = new Agent({
  name: "Compliance Analyst",
  role: "Summarize case notes for review",
  goal: "Produce concise findings with traceable reasoning",
  backstory: "You work on regulated workflows and must be precise.",
});

const task = new Task({
  description: "Review the case notes and return a short compliance summary.",
  expectedOutput: "A short bullet list of findings.",
  agent: analyst,
});

audit("task_created", { agent: analyst.name, task: task.description });
  4. Wrap crew execution in a function that logs start, success, and failure. This is the part most teams skip; if the crew throws, you still need a record of the attempt.
// crew.ts
import { Crew } from "crewai";
import { audit } from "./audit";
import { analyst, task } from "./task";

export async function runCrew() {
  const crew = new Crew({
    agents: [analyst],
    tasks: [task],
  });

  audit("crew_run_started", {
    crewSize: 1,
    taskCount: 1,
  });

  try {
    const result = await crew.kickoff();
    audit("crew_run_succeeded", {
      outputPreview: String(result).slice(0, 500),
    });
    return result;
  } catch (error) {
    // Log the failure before rethrowing so the attempt is still on record.
    audit("crew_run_failed", {
      error: error instanceof Error ? error.message : String(error),
    });
    throw error;
  }
}
  5. Add a tool-level audit hook if your crew uses tools. This gives you evidence for external side effects like database lookups or HTTP calls, which is usually what auditors care about first.
// tools.ts
import { Tool } from "crewai";
import { audit } from "./audit";

export const getCaseStatusTool = new Tool({
  name: "get_case_status",
  description: "Fetch case status by case id",
  execute: async (caseId: string) => {
    audit("tool_called", { tool: "get_case_status", caseId });

    // Stubbed lookup; replace with your real database or HTTP call.
    const status = {
      caseId,
      state: "open",
      owner: "ops-team",
    };

    audit("tool_returned", { tool: "get_case_status", caseId, state: status.state });
    return JSON.stringify(status);
  },
});
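To put the tool in play, attach it when you construct the agent. The exact option name depends on your CrewAI version; the sketch below assumes the constructor accepts a tools array, so verify it against your installed typings:

import { Agent } from "crewai";
import { getCaseStatusTool } from "./tools";

// Assumes the Agent constructor accepts a `tools` array; confirm against
// the typings shipped with your crewai version.
const analystWithTools = new Agent({
  name: "Compliance Analyst",
  role: "Summarize case notes for review",
  goal: "Produce concise findings with traceable reasoning",
  backstory: "You work on regulated workflows and must be precise.",
  tools: [getCaseStatusTool],
});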
  6. Put it together in an entry file and run it with tsx. Keep the output simple at first; once the logs are correct, you can enrich them with request IDs, user IDs, or tenant IDs.
// main.ts
import "dotenv/config";
import { Agent, Crew, Task } from "crewai";
import { audit } from "./audit";

async function main() {
  const analyst = new Agent({
    name: "Compliance Analyst",
    role: "Summarize case notes for review",
    goal: "Produce concise findings with traceable reasoning",
    backstory: "You work on regulated workflows and must be precise.",
  });

  const task = new Task({
    description: "Review the case notes and return a short compliance summary.",
    expectedOutput: "A short bullet list of findings.",
    agent: analyst,
  });

  const crew = new Crew({ agents: [analyst], tasks: [task] });

  audit("run_started", { agent: analyst.name });

  const result = await crew.kickoff();
  audit("run_completed", { outputPreview: String(result).slice(0, 300) });

  console.log(result);
}

main().catch((error) => {
  audit("run_failed", {
    error: error instanceof Error ? error.message : String(error),
  });
  process.exitCode = 1;
});
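With the files in place, run the entry file (adjust the path if yours differs):

npx tsx main.ts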

Testing It

Run the script once with a valid OpenAI key and confirm that logs/audit.log is created. You should see one JSON object per line with timestamps and event names like run_started, crew_run_succeeded, or tool_called.
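A healthy run of the entry file from step 6 looks roughly like this (timestamps and preview text are illustrative):

{"ts":"2026-04-21T12:00:00.000Z","event":"run_started","agent":"Compliance Analyst"}
{"ts":"2026-04-21T12:00:08.000Z","event":"run_completed","outputPreview":"- No open compliance gaps found ..."}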

Then force a failure by removing your API key or breaking the prompt on purpose. The important check is that run_failed still gets written even when the crew crashes before returning output.
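On macOS or Linux you can simulate a missing key for a single run without editing .env. Since dotenv does not override variables that are already set, the empty value wins:

OPENAI_API_KEY= npx tsx main.ts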

If you use tools, inspect whether every external call has both an entry log and a return log. That gives you enough detail to reconstruct execution without turning your app into a noisy debug dump.

Next Steps

  • Add correlation IDs so one user request maps to one crew run across services (see the sketch below)
  • Send the same JSON logs to stdout plus your SIEM or log pipeline
  • Extend the logger with redaction rules for PII and secrets
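For the first item, one minimal approach is to stamp every event in a run with the same ID. The helper below is a sketch layered on the audit function from step 2; auditForRun and runId are names introduced here for illustration:

import { randomUUID } from "node:crypto";
import { audit } from "./audit";

// One ID per crew run: call auditForRun() once at the start of a run and
// use the returned function in place of audit() so every event shares it.
export function auditForRun() {
  const runId = randomUUID();
  return (event: string, data: Record<string, unknown>) =>
    audit(event, { runId, ...data });
}

Calling const log = auditForRun() at the top of runCrew() and replacing the audit calls with log gives every event in that run the same runId.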

By Cyprian Aarons, AI Consultant at Topiax.