AutoGen Tutorial (TypeScript): adding audit logs for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add durable audit logs to an AutoGen TypeScript agent workflow, capturing every model call, tool call, and message transition. You need this when you’re building systems for regulated environments where you must prove what the agent saw, decided, and returned.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with "type": "module" enabled or ESM-compatible config
  • @autogenai/autogen installed
  • openai installed
  • An OpenAI API key in OPENAI_API_KEY
  • A writable log directory, like ./audit
  • Basic familiarity with AutoGen agents and tool execution

Step-by-Step

  1. Start by installing the dependencies and setting up a minimal project. I’m using plain Node file output for audit logs because it’s simple, deterministic, and easy to ship into SIEM later.
npm init -y
npm install @autogenai/autogen openai
npm install -D typescript tsx @types/node
mkdir -p src audit
  2. Create a small audit logger that writes JSON Lines. This format is easy to ingest, append-only, and works well when you need one record per event.
// src/audit.ts
import { appendFile } from "node:fs/promises";

export type AuditEvent = {
  ts: string;
  runId: string;
  actor: string;
  type: string;
  payload: unknown;
};

export async function writeAudit(event: AuditEvent) {
  const line = `${JSON.stringify(event)}\n`;
  await appendFile("./audit/autogen-audit.jsonl", line, "utf8");
}
  3. Build your AutoGen agents and wire audit events into the message flow. The important part is that you log both the request path and the response path so you can reconstruct the conversation later.
// src/index.ts
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";
import { OpenAIChatCompletionClient } from "@autogenai/autogen-ext/models/openai";
import { randomUUID } from "node:crypto";
import { writeAudit } from "./audit.js";

const client = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

// Import randomUUID from node:crypto rather than relying on the global
// crypto object, which Node 18 only exposes behind a flag.
const runId = randomUUID();

const assistant = new AssistantAgent({
  name: "assistant",
  modelClient: client,
});

const user = new UserProxyAgent({
  name: "user",
});

await writeAudit({
  ts: new Date().toISOString(),
  runId,
  actor: "system",
  type: "run_started",
  payload: { model: "gpt-4o-mini" },
});
  4. Add a wrapper around each send operation so every user prompt and agent reply gets logged. In production, this is where you also attach tenant IDs, case IDs, or policy references.
async function auditedSend(prompt: string) {
  await writeAudit({
    ts: new Date().toISOString(),
    runId,
    actor: "user",
    type: "message_sent",
    payload: { prompt },
  });

  const result = await user.send({
    recipient: assistant,
    message: prompt,
  });

  await writeAudit({
    ts: new Date().toISOString(),
    runId,
    actor: "assistant",
    type: "message_received",
    payload: result,
  });

  return result;
}

await auditedSend("Summarize the risk of approving a loan with missing income docs.");
  5. If you use tools, log before and after execution. That gives you a clean trail for external side effects like database reads, policy checks, or document retrieval.
async function auditedToolCall(name: string, input: Record<string, unknown>) {
  await writeAudit({
    ts: new Date().toISOString(),
    runId,
    actor: "tool",
    type: "tool_start",
    payload: { name, input },
  });

  // Stand-in result: in a real app, execute the actual tool here.
  const output = { approved: false, reason: "missing_income_docs" };

  await writeAudit({
    ts: new Date().toISOString(),
    runId,
    actor: "tool",
    type: "tool_end",
    payload: { name, output },
  });

  return output;
}

await auditedToolCall("risk_check", { applicantId: "A123" });
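The wrapper above logs tool_start and tool_end, but a throwing tool would exit before the end event is written, leaving an unbalanced trail. Here's a sketch of a failure-safe variant, assuming an injectable sink in place of writeAudit — auditedToolCallSafe, the sink parameter, and the tool_error event type are illustrative names, not part of the tutorial's API:

```typescript
// A failure-safe tool wrapper: the closing event (tool_end or
// tool_error) is emitted even when the tool throws, so start/end
// pairs stay balanced in the audit trail.
type Sink = (event: {
  ts: string;
  runId: string;
  actor: string;
  type: string;
  payload: unknown;
}) => Promise<void> | void;

export async function auditedToolCallSafe<T>(
  runId: string,
  name: string,
  input: Record<string, unknown>,
  tool: (input: Record<string, unknown>) => Promise<T> | T,
  sink: Sink,
): Promise<T> {
  await sink({ ts: new Date().toISOString(), runId, actor: "tool", type: "tool_start", payload: { name, input } });
  try {
    const output = await tool(input);
    await sink({ ts: new Date().toISOString(), runId, actor: "tool", type: "tool_end", payload: { name, output } });
    return output;
  } catch (err) {
    // Record the failure before rethrowing, so the trail shows why the run stopped.
    await sink({ ts: new Date().toISOString(), runId, actor: "tool", type: "tool_error", payload: { name, error: String(err) } });
    throw err;
  }
}
```

Injecting the sink also makes the wrapper trivially testable with an in-memory array instead of a file.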
  6. Run the script with your API key set and verify that the log file is appended on every execution. If you want stronger guarantees later, replace local file writes with a centralized sink like Kafka, CloudWatch, or Azure Monitor.
OPENAI_API_KEY=your_key_here npx tsx src/index.ts
cat audit/autogen-audit.jsonl

Testing It

Run the script twice and confirm that audit/autogen-audit.jsonl contains multiple JSON objects separated by newlines. Each record should have a timestamp, a stable runId, an actor label, and a payload you can parse back into structured data.

Then intentionally change the prompt and verify that the logged prompt changes too. If you’re using tools in your real app, confirm that every tool invocation creates both a start and end event even when the tool fails.

For a stronger check, pipe the file through jq:

cat audit/autogen-audit.jsonl | jq .

If that succeeds without parse errors, your log format is clean enough for downstream ingestion.
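If you'd rather verify the schema in code than eyeball jq output, here's a sketch of a line validator in TypeScript — validateAuditLines is a hypothetical helper, but the field names match the AuditEvent type from src/audit.ts:

```typescript
// Parse each JSON Lines record and check the fields that the
// tutorial's AuditEvent type promises. Returns counts rather than
// throwing, so you can report how dirty a log file is.
export function validateAuditLines(text: string): { ok: number; bad: number } {
  let ok = 0;
  let bad = 0;
  for (const line of text.split("\n")) {
    if (line.trim() === "") continue; // skip the trailing newline
    try {
      const rec = JSON.parse(line);
      const valid =
        typeof rec.ts === "string" &&
        typeof rec.runId === "string" &&
        typeof rec.actor === "string" &&
        typeof rec.type === "string" &&
        "payload" in rec;
      valid ? ok++ : bad++;
    } catch {
      bad++; // not valid JSON at all
    }
  }
  return { ok, bad };
}
```

A bad count of zero means the file is safe to hand to a downstream pipeline that assumes one well-formed record per line.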

Next Steps

  • Add redaction for PII fields before writing audit records.
  • Store runId, tenant ID, and policy version together so compliance teams can trace decisions.
  • Replace local file logging with an event pipeline backed by your observability stack.
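For the redaction item, here's a minimal sketch of a recursive key-based scrub you could run on payloads before calling writeAudit — the redactPII name and the key list are examples, not a complete PII policy:

```typescript
// Recursively replace values under known-sensitive keys before the
// payload is written to the audit log. The key list here is only an
// example; real deployments would drive it from a policy config.
const PII_KEYS = new Set(["ssn", "email", "phone", "dateOfBirth"]);

export function redactPII(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redactPII);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = PII_KEYS.has(k) ? "[REDACTED]" : redactPII(v);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}
```

Redacting at write time, rather than at query time, means the sensitive values never reach disk in the first place.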

By Cyprian Aarons, AI Consultant at Topiax.
