CrewAI Tutorial (TypeScript): adding audit logs for beginners

By Cyprian Aarons. Updated 2026-04-21

This tutorial shows you how to add audit logs to a CrewAI workflow in TypeScript so every important agent action is recorded with a timestamp, actor, and event type. This matters when you’re building systems for regulated environments like banking or insurance, where prompts, tool calls, and final outputs all need a traceable record.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or compiled output
  • CrewAI TypeScript package installed
  • An LLM API key configured in your environment
  • Basic familiarity with:
    • agents
    • tasks
    • crews
  • A writable log destination:
    • local file for development
    • database or log pipeline for production

Install the packages:

npm install @crewai/core zod dotenv
npm install -D typescript ts-node @types/node

Set your environment variables:

export OPENAI_API_KEY="your_key_here"

Step-by-Step

  1. Create a small audit logger first. Keep it boring: one function that writes structured JSON lines to disk. That makes it easy to grep, ship to Splunk, or ingest into OpenSearch later.
// audit.ts
import { appendFile } from "node:fs/promises";

export type AuditEvent = {
  timestamp: string;
  actor: string;
  action: string;
  status: "started" | "completed" | "failed";
  details?: Record<string, unknown>;
};

const AUDIT_FILE = "./audit.log";

export async function writeAuditLog(event: AuditEvent): Promise<void> {
  await appendFile(AUDIT_FILE, `${JSON.stringify(event)}\n`, "utf8");
}
  2. Define your agent and task as usual, then wrap the execution with audit events. The trick is not changing CrewAI internals; just log before and after the crew runs.
// index.ts
import "dotenv/config";
import { Agent, Task, Crew } from "@crewai/core";
import { writeAuditLog } from "./audit";

async function main() {
  const analyst = new Agent({
    role: "Risk Analyst",
    goal: "Summarize transaction anomalies",
    backstory: "You review suspicious account activity for compliance teams.",
    verbose: true,
  });

  const task = new Task({
    description: "Analyze the following transaction summary and flag anomalies.",
    expectedOutput: "A concise list of anomalies and recommended follow-up actions.",
    agent: analyst,
  });

  const crew = new Crew({
    agents: [analyst],
    tasks: [task],
    verbose: true,
  });

  await writeAuditLog({
    timestamp: new Date().toISOString(),
    actor: "system",
    action: "crew.run",
    status: "started",
    details: { crewName: "risk-review" },
  });

  const result = await crew.kickoff();

  await writeAuditLog({
    timestamp: new Date().toISOString(),
    actor: "system",
    action: "crew.run",
    status: "completed",
    details: { crewName: "risk-review", result: String(result) },
  });

  console.log(result);
}

main().catch(async (error) => {
  await writeAuditLog({
    timestamp: new Date().toISOString(),
    actor: "system",
    action: "crew.run",
    status: "failed",
    details: { message: error instanceof Error ? error.message : String(error) },
  });

  process.exit(1);
});
  3. Log tool usage too if your agents call external systems. In regulated workflows, the audit trail should show what was requested, by whom, and whether it succeeded.
// tools.ts
import { z } from "zod";
import { writeAuditLog } from "./audit";

export async function auditedLookup(accountId: string): Promise<string> {
  await writeAuditLog({
    timestamp: new Date().toISOString(),
    actor: "Risk Analyst",
    action: "tool.lookupAccount",
    status: "started",
    details: { accountId },
  });

  try {
    const result = `Account ${accountId}: no open alerts`;

    await writeAuditLog({
      timestamp: new Date().toISOString(),
      actor: "Risk Analyst",
      action: "tool.lookupAccount",
      status: "completed",
      details: { accountId, result },
    });

    return result;
  } catch (error) {
    await writeAuditLog({
      timestamp: new Date().toISOString(),
      actor: "Risk Analyst",
      action: "tool.lookupAccount",
      status: "failed",
      details: { accountId, message: error instanceof Error ? error.message : String(error) },
    });
    throw error;
  }
}

export const LookupInputSchema = z.object({
  accountId: z.string().min(1),
});
  4. If you want more granular logs, emit an event per task boundary and include a correlation ID. That gives you a single thread across agent runs, tool calls, and downstream services.
// correlation.ts
import crypto from "node:crypto";
import { writeAuditLog } from "./audit";

export function createCorrelationId(): string {
  return crypto.randomUUID();
}

export async function logTaskBoundary(
  correlationId: string,
  taskName: string,
  status: "started" | "completed" | "failed"
): Promise<void> {
  await writeAuditLog({
    timestamp: new Date().toISOString(),
    actor: "system",
    action: `task.${taskName}`,
    status,
    details: { correlationId },
  });
}
  5. Run the workflow with one correlation ID and keep the logs machine-readable. In production, send the same JSON payload to stdout plus your log collector; don’t invent separate formats.
// run.ts
import { Agent, Task, Crew } from "@crewai/core";
import { createCorrelationId, logTaskBoundary } from "./correlation";

async function run() {
  const correlationId = createCorrelationId();

  const agent = new Agent({
    role: "Compliance Reviewer",
    goal: "Review customer-facing text for policy issues",
    backstory: "You work in a financial compliance team.",
    verbose: true,
  });

  const task = new Task({
    description: `Review this statement for compliance issues. Correlation ID ${correlationId}`,
    expectedOutput: "A list of policy concerns and safer wording suggestions.",
    agent,
  });

  const crew = new Crew({ agents: [agent], tasks: [task], verbose: true });

  await logTaskBoundary(correlationId, "compliance-review", "started");
  const output = await crew.kickoff();
  await logTaskBoundary(correlationId, "compliance-review", "completed");

  console.log(output);
}

run().catch((error) => {
  console.error(error);
  process.exit(1);
});
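
The advice above about sending the same JSON payload to stdout plus your log collector can be collapsed into one emit function. This is a minimal sketch: `emitAudit` is a name I’m introducing here, and the event shape mirrors the `AuditEvent` type from audit.ts.

```typescript
// emit.ts — write the identical JSON line to stdout (for a log collector
// tailing the process) and to the local audit file, so the two copies of
// the record can never diverge in format.
import { appendFile } from "node:fs/promises";

type AuditEvent = {
  timestamp: string;
  actor: string;
  action: string;
  status: "started" | "completed" | "failed";
  details?: Record<string, unknown>;
};

export async function emitAudit(event: AuditEvent): Promise<void> {
  const line = `${JSON.stringify(event)}\n`;
  process.stdout.write(line); // the collector reads stdout
  await appendFile("./audit.log", line, "utf8"); // local copy stays identical
}
```

Swapping `writeAuditLog` calls for `emitAudit` keeps the rest of the tutorial code unchanged.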

Testing It

Run your script once and check that audit.log contains newline-delimited JSON entries. You should see at least a started event and a completed event with timestamps in ISO format.

Then force a failure by breaking your API key or throwing an error inside the script. Confirm that the failed audit event is written before the process exits.

For production readiness, validate three things:

  • every record has a timestamp
  • every record has an action name
  • every record can be parsed as JSON without custom logic
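
Those three checks can be automated with a short script. This is a sketch; the function name and the assumption that each line of audit.log is one record are mine.

```typescript
// validate-audit.ts — confirm every audit record parses as JSON and carries
// an ISO timestamp and a non-empty action name. Returns one error message
// per bad record, so an empty array means the log is production-ready.
export function validateAuditLines(lines: string[]): string[] {
  const errors: string[] = [];
  lines.forEach((line, i) => {
    let record: Record<string, unknown>;
    try {
      record = JSON.parse(line);
    } catch {
      errors.push(`line ${i + 1}: not valid JSON`);
      return;
    }
    if (typeof record.timestamp !== "string" || Number.isNaN(Date.parse(record.timestamp))) {
      errors.push(`line ${i + 1}: missing or invalid timestamp`);
    }
    if (typeof record.action !== "string" || record.action.length === 0) {
      errors.push(`line ${i + 1}: missing action name`);
    }
  });
  return errors;
}
```

Feed it the file contents with `readFileSync("./audit.log", "utf8").split("\n").filter(Boolean)` and fail your CI step if the returned array is non-empty.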

Next Steps

  • Add request IDs and user IDs from your web layer so audits connect back to real users.
  • Ship logs to a centralized system like OpenSearch, Datadog, or CloudWatch.
  • Add redaction rules so prompts and outputs don’t leak sensitive data into logs.
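
For that last item, a starting point might look like the sketch below. The key list is illustrative, not a compliance policy; pick keys that match your own data.

```typescript
// redact.ts — mask values for keys that commonly carry sensitive data
// before they are attached to an audit event. SENSITIVE_KEYS is an
// illustrative example list, not a complete policy.
const SENSITIVE_KEYS = new Set(["accountId", "ssn", "email", "prompt"]);

export function redactDetails(
  details: Record<string, unknown>
): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(details)) {
    clean[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return clean;
}
```

Call `redactDetails` on the `details` object just before handing it to `writeAuditLog`, so raw values never touch disk.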

By Cyprian Aarons, AI Consultant at Topiax.
