AutoGen Tutorial (TypeScript): adding audit logs for beginners
This tutorial shows you how to add audit logging to an AutoGen TypeScript agent workflow. You’ll capture who did what, when it happened, and what the agent actually sent or received, which is the minimum you need for debugging, compliance, and incident reviews.
What You'll Need
- Node.js 18+ installed
- A TypeScript project with ts-node or a build step already set up
- @autogenai/autogen installed
- dotenv installed for environment variables
- An OpenAI API key in .env
- Basic familiarity with AutoGen agents and run() / chat execution
Install the packages:
npm install @autogenai/autogen dotenv
npm install -D typescript ts-node @types/node
Step-by-Step
- Start by creating a small audit logger. Keep it dead simple: write JSON lines to a file so every event is easy to grep, ship to a log pipeline, or replay later. For beginner setups, file-based audit logs are enough and much easier to verify than database writes.
import fs from "node:fs";
import path from "node:path";

// One audit record per line (JSON Lines), so the file stays easy to grep and stream.
type AuditEvent = {
  ts: string; // ISO 8601 timestamp
  actor: string; // who triggered the action
  action: string; // what happened
  payload: unknown; // request/response details
};

const logFile = path.join(process.cwd(), "audit.log");

// Synchronous append keeps events in order and on disk even if the process crashes right after.
export function auditLog(event: AuditEvent): void {
  fs.appendFileSync(logFile, `${JSON.stringify(event)}\n`, "utf8");
}
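Before wiring it into an agent, you can smoke-test the logger on its own. Assuming you saved the module above as audit.ts, one call should append exactly one JSON line:

import { auditLog } from "./audit";

auditLog({
  ts: new Date().toISOString(),
  actor: "user_123",
  action: "smoke_test",
  payload: { note: "hello" },
});

// audit.log now ends with a line like:
// {"ts":"2025-01-01T12:00:00.000Z","actor":"user_123","action":"smoke_test","payload":{"note":"hello"}}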
- Load your API key and create the model client. This keeps secrets out of source control and gives the agent something real to talk to. If you already use .env, this fits directly into that setup.
import "dotenv/config";
import { OpenAIChatCompletionClient } from "@autogenai/autogen";
const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
throw new Error("OPENAI_API_KEY is missing");
}
export const modelClient = new OpenAIChatCompletionClient({
apiKey,
model: "gpt-4o-mini",
});
- Create an assistant agent and log the important lifecycle events around it. The trick is not to log only errors; log start, input, output, and completion so you can reconstruct the full flow later. In production systems, this is what makes audits useful instead of decorative.
import { AssistantAgent } from "@autogenai/autogen";
import { auditLog } from "./audit";
import { modelClient } from "./model";
const agent = new AssistantAgent({
name: "support_agent",
modelClient,
systemMessage: "You are a concise support assistant.",
});
export async function runSupportRequest(userId: string, prompt: string) {
auditLog({
ts: new Date().toISOString(),
actor: userId,
action: "agent_request_started",
payload: { agent: "support_agent", prompt },
});
const result = await agent.run([{ role: "user", content: prompt }]);
auditLog({
ts: new Date().toISOString(),
actor: userId,
action: "agent_request_completed",
payload: { agent: "support_agent", result },
});
return result;
}
- If you want better traceability, wrap every tool call too. That gives you a clean chain from user request to tool execution to final response, which matters when an agent reads customer data or triggers side effects. This pattern also helps when someone asks, “Why did the agent say that?” The version below records failures as well, so aborted side effects show up in the trail.
import { auditLog } from "./audit";

export async function auditedToolCall<TArgs extends object, TResult>(
  userId: string,
  toolName: string,
  args: TArgs,
  fn: (args: TArgs) => Promise<TResult>,
): Promise<TResult> {
  auditLog({
    ts: new Date().toISOString(),
    actor: userId,
    action: "tool_call_started",
    payload: { toolName, args },
  });
  try {
    const result = await fn(args);
    auditLog({
      ts: new Date().toISOString(),
      actor: userId,
      action: "tool_call_completed",
      payload: { toolName, result },
    });
    return result;
  } catch (error) {
    // Log failures too: an aborted side effect is exactly what an audit needs to show.
    auditLog({
      ts: new Date().toISOString(),
      actor: userId,
      action: "tool_call_failed",
      payload: { toolName, error: String(error) },
    });
    throw error;
  }
}
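To see the wrapper in action, here is a usage sketch. lookupOrder, OrderArgs, and the "./audited-tool" import path are placeholders for this example, not AutoGen APIs; swap in your real tool.

import { auditedToolCall } from "./audited-tool";

// A stand-in tool for illustration only.
type OrderArgs = { orderId: string };
async function lookupOrder(args: OrderArgs): Promise<{ status: string }> {
  return { status: `order ${args.orderId} found` };
}

async function demo() {
  // Emits tool_call_started, runs the tool, then emits tool_call_completed.
  const order = await auditedToolCall("user_123", "lookup_order", { orderId: "A-42" }, lookupOrder);
  console.log(order.status);
}

demo().catch(console.error);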
- Put it together in a runnable entry point. This example calls the agent once and prints the response while also writing audit records to disk. Keep the input small at first so you can inspect the logs without noise.
import { runSupportRequest } from "./agent";
async function main() {
const userId = "user_123";
const prompt = "Summarize our refund policy in one paragraph.";
const result = await runSupportRequest(userId, prompt);
console.log(JSON.stringify(result, null, 2));
}
main().catch((error) => {
console.error(error);
process.exit(1);
});
Testing It
Run your script and confirm two things happen at once: you get an assistant response in the terminal, and an audit.log file appears in your project root. Open that file and check that each line is valid JSON with timestamps, actor IDs, actions, and payloads.
Then trigger a second request with a different prompt or user ID. You should see a second pair of agent_request_started and agent_request_completed entries appended in order.
If you wrapped any tools, call one of them and verify its tool_call_started and tool_call_completed events show up as well. That’s your proof that the audit trail covers both orchestration and side effects.
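If you'd rather not eyeball the file, a short check script can do it. This is a minimal sketch that assumes audit.log sits in your project root, exactly where the logger from the first step writes it:

import fs from "node:fs";

// Parse every non-empty line; JSON.parse throws on the first corrupted one.
const lines = fs
  .readFileSync("audit.log", "utf8")
  .split("\n")
  .filter((line) => line.trim() !== "");

for (const [index, line] of lines.entries()) {
  const event = JSON.parse(line);
  if (!event.ts || !event.actor || !event.action) {
    throw new Error(`Line ${index + 1} is missing ts, actor, or action`);
  }
}

console.log(`OK: ${lines.length} valid audit events`);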
Next Steps
- Add request IDs and correlation IDs so one user interaction can be traced across multiple services
- Replace file logging with structured logging to Elasticsearch, Loki, or CloudWatch
- Redact sensitive fields before writing payloads to disk or sending them to your SIEM (one possible shape is sketched below)
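For the redaction bullet, one possible shape is a shallow helper you run over payloads before calling auditLog. The SENSITIVE_KEYS list and redact function below are illustrative names, not a library API; adapt them to your own data.

// Extend this list for your own domain (tokens, account numbers, etc.).
const SENSITIVE_KEYS = new Set(["apiKey", "password", "ssn", "cardNumber"]);

// Shallow redaction: replaces sensitive top-level values before logging.
export function redact(payload: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    clean[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return clean;
}

// Usage: pass payload: redact({ prompt, cardNumber }) inside any auditLog call.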
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.