AutoGen Tutorial (TypeScript): adding audit logs for intermediate developers
This tutorial shows you how to add structured audit logs to an AutoGen TypeScript agent workflow, so you can trace who did what, when, and with which model output. You need this when building agent systems for regulated environments, incident review, or simply when debugging multi-step conversations that touch external systems.
What You'll Need
- Node.js 18+
- A TypeScript project with `ts-node` or `tsx`
- AutoGen for TypeScript installed
- An OpenAI API key in `OPENAI_API_KEY`
- A terminal and a text editor
- Basic familiarity with `AssistantAgent`, `UserProxyAgent`, and `run()` in AutoGen
Install the packages if you do not already have them:
```bash
npm install @autogenai/autogen openai
npm install -D typescript tsx @types/node
```
Step-by-Step
- Start by creating a small audit logger that writes JSON lines (JSONL) to stdout. This keeps the implementation simple and production-friendly, because JSONL is easy to ship into CloudWatch, Datadog, or Elasticsearch later.

```ts
// audit.ts
export type AuditEvent = {
  ts: string;     // ISO-8601 timestamp
  runId: string;  // correlates every event in one run
  actor: string;  // who produced the event: "user", "assistant", "system"
  event: string;  // event name, e.g. "prompt_submitted"
  data?: Record<string, unknown>;
};

export class AuditLogger {
  log(event: AuditEvent) {
    process.stdout.write(JSON.stringify(event) + "\n");
  }
}
```
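Before wiring the logger into an agent run, it is worth making it testable. One variant, sketched here rather than part of the tutorial's `audit.ts`, lets the constructor accept an injectable sink that defaults to stdout, so tests can capture lines in memory instead of patching `process.stdout`:

```typescript
// Variant of the AuditLogger with an injectable output sink (a sketch,
// not the tutorial's canonical version). The default still writes JSONL
// to stdout, so behavior is unchanged in production.
type AuditEvent = {
  ts: string;
  runId: string;
  actor: string;
  event: string;
  data?: Record<string, unknown>;
};

class AuditLogger {
  constructor(
    private sink: (line: string) => void = (line) => process.stdout.write(line),
  ) {}

  log(event: AuditEvent) {
    this.sink(JSON.stringify(event) + "\n");
  }
}

// Collect lines in memory instead of writing to stdout.
const lines: string[] = [];
const audit = new AuditLogger((line) => lines.push(line));

audit.log({
  ts: new Date().toISOString(),
  runId: "test-run",
  actor: "system",
  event: "run_started",
});

// Each captured line should round-trip as JSON with the fields we set.
const parsed = JSON.parse(lines[0]);
```

The same sink hook is also a natural seam for swapping in a file stream or a network transport later.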
- Next, create your agents and wrap the conversation with explicit audit events. The key pattern is to log before and after the agent run, plus any intermediate messages you care about for traceability.

```ts
// main.ts
import { randomUUID } from "node:crypto";
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";
import OpenAI from "openai";
import { AuditLogger } from "./audit.js";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const audit = new AuditLogger();
const runId = randomUUID(); // one ID shared by every event in this run

const assistant = new AssistantAgent({
  name: "assistant",
  modelClient: client,
  systemMessage: "You are a concise assistant.",
});

const user = new UserProxyAgent({
  name: "user",
});
```
- Now add a helper that records the input and output of each step. In real systems, this is also where you would attach correlation IDs, tenant IDs, or case numbers so the logs can be tied back to a business record.

```ts
async function auditedRun(prompt: string) {
  audit.log({
    ts: new Date().toISOString(),
    runId,
    actor: "user",
    event: "prompt_submitted",
    data: { prompt },
  });

  const result = await user.run(assistant, prompt, { maxTurns: 2 });

  audit.log({
    ts: new Date().toISOString(),
    runId,
    actor: "assistant",
    event: "run_completed",
    data: { result },
  });

  return result;
}
```
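The correlation fields mentioned above can be sketched as a small factory that stamps shared context onto every event. The field names (`tenantId`, `caseId`) are illustrative placeholders, not an AutoGen API; pick whatever ties logs to your business records:

```typescript
// Sketch: a context-aware audit function. Every event it emits carries
// the same correlation fields, so one grep on caseId finds the whole run.
type AuditEvent = {
  ts: string;
  runId: string;
  actor: string;
  event: string;
  data?: Record<string, unknown>;
};

type AuditContext = {
  runId: string;
  tenantId?: string; // illustrative field names, adapt to your schema
  caseId?: string;
};

function makeAudit(ctx: AuditContext, write: (line: string) => void) {
  return (actor: string, event: string, data?: Record<string, unknown>) => {
    const entry: AuditEvent & AuditContext = {
      ts: new Date().toISOString(),
      actor,
      event,
      data,
      ...ctx, // stamp the shared context onto every line
    };
    write(JSON.stringify(entry) + "\n");
  };
}

// Capture lines in memory for demonstration; production would use stdout.
const lines: string[] = [];
const log = makeAudit(
  { runId: "run-1", tenantId: "acme", caseId: "CASE-42" },
  (line) => lines.push(line),
);

log("user", "prompt_submitted", { prompt: "hello" });
log("assistant", "run_completed");
```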
- If you want intermediate visibility, capture message events from the conversation instead of only logging the final result. This is the part most teams miss: final output alone does not tell you why the agent made a bad decision or called the wrong tool.

```ts
async function main() {
  audit.log({
    ts: new Date().toISOString(),
    runId,
    actor: "system",
    event: "run_started",
  });

  const result = await auditedRun(
    "Summarize why audit logs matter in agent workflows.",
  );
  console.log(result);

  audit.log({
    ts: new Date().toISOString(),
    runId,
    actor: "system",
    event: "run_finished",
  });
}

main().catch((err) => {
  audit.log({
    ts: new Date().toISOString(),
    runId,
    actor: "system",
    event: "run_failed",
    data: { message: err instanceof Error ? err.message : String(err) },
  });
  process.exit(1);
});
```
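The exact hook for per-message capture depends on your AutoGen version, so here is a framework-agnostic sketch: assuming you can obtain the conversation's messages as role/content pairs, emit one audit event per message:

```typescript
// Sketch of per-message auditing. The ChatMessage shape and the point
// where you obtain the message list are assumptions; adapt them to
// whatever your AutoGen version exposes after (or during) a run.
type ChatMessage = { role: string; content: string };

function auditMessages(
  runId: string,
  messages: ChatMessage[],
  write: (line: string) => void,
) {
  for (const [i, msg] of messages.entries()) {
    write(
      JSON.stringify({
        ts: new Date().toISOString(),
        runId,
        actor: msg.role,
        event: "message",
        data: { index: i, content: msg.content },
      }) + "\n",
    );
  }
}

// Demonstration with a stubbed transcript.
const lines: string[] = [];
auditMessages(
  "run-1",
  [
    { role: "user", content: "Summarize our audit policy." },
    { role: "assistant", content: "Audit logs record who did what, when." },
  ],
  (line) => lines.push(line),
);
```

The `index` field preserves message order even if your log sink reorders lines by ingestion time.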
- Finally, make the logs safer by redacting sensitive fields before writing them out. For banking and insurance work, do not dump raw prompts or tool payloads unless you have a clear retention policy and access controls.

```ts
// Note: these patterns only catch unformatted 16-digit card numbers and
// hyphenated SSNs. Real deployments need broader rules (spaces, dashes,
// account numbers) and ideally a dedicated DLP tool.
function redact(input: string) {
  return input
    .replace(/\b\d{16}\b/g, "[REDACTED_CARD]")
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]");
}

audit.log({
  ts: new Date().toISOString(),
  runId,
  actor: "user",
  event: "prompt_submitted",
  data: { prompt: redact("My card is 4111111111111111") },
});
```
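The snippet above only redacts a flat string. Tool arguments and message arrays are usually nested, so one option, sketched here, is to apply the same rules recursively before logging:

```typescript
// Sketch: recursive redaction for nested audit payloads. Same regex rules
// as the flat redact() above, walked over objects and arrays.
function redactString(input: string): string {
  return input
    .replace(/\b\d{16}\b/g, "[REDACTED_CARD]")
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]");
}

function redactDeep(value: unknown): unknown {
  if (typeof value === "string") return redactString(value);
  if (Array.isArray(value)) return value.map(redactDeep);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k,
        redactDeep(v),
      ]),
    );
  }
  return value; // numbers, booleans, null pass through unchanged
}

// Nested payload: both the prompt and the tool argument get scrubbed.
const scrubbed = redactDeep({
  prompt: "My card is 4111111111111111",
  tool: { args: ["SSN 123-45-6789"] },
}) as { prompt: string; tool: { args: string[] } };
```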
Testing It
Run the script with OPENAI_API_KEY set in your environment and watch stdout for JSON lines. You should see run_started, prompt_submitted, run_completed, and run_finished events with the same runId.
If the agent fails, verify that the failure is captured by the run_failed event rather than crashing silently. Also check that each log line is valid JSON and includes a timestamp in ISO format.
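Those checks can be scripted. A minimal sketch of a validator that asserts each captured line parses as JSON, carries a parseable timestamp, and shares a single runId:

```typescript
// Sketch: batch-validate captured log lines. Returns the first failure
// reason instead of throwing, so it can run inside a test harness.
function validateLogLines(lines: string[]): { ok: boolean; reason?: string } {
  let runId: string | undefined;
  for (const line of lines) {
    let event: { ts?: string; runId?: string };
    try {
      event = JSON.parse(line);
    } catch {
      return { ok: false, reason: "invalid JSON" };
    }
    if (!event.ts || Number.isNaN(Date.parse(event.ts))) {
      return { ok: false, reason: "missing or non-ISO timestamp" };
    }
    runId ??= event.runId; // first line establishes the expected runId
    if (event.runId !== runId) {
      return { ok: false, reason: "runId mismatch" };
    }
  }
  return { ok: true };
}

const good = validateLogLines([
  '{"ts":"2024-05-01T12:00:00.000Z","runId":"r1","event":"run_started"}',
  '{"ts":"2024-05-01T12:00:01.000Z","runId":"r1","event":"run_finished"}',
]);
const bad = validateLogLines(['{"ts":"not a date","runId":"r1"}']);
```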
For a quick sanity check, pipe the output through jq:

```bash
OPENAI_API_KEY=your_key_here npx tsx main.ts | jq .
```
If you plan to store these logs in production, confirm that redaction works by sending test prompts containing fake card numbers or SSNs.
Next Steps
- Add tool-call auditing so every external action gets its own log event.
- Send the JSONL output to a real sink like Loki, CloudWatch Logs, or Azure Monitor.
- Add per-tenant fields like `customerId`, `caseId`, and `requestId` so audits are searchable across services.
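The first of those steps can be sketched as a generic wrapper: every tool call emits its own started/finished/failed events. The event names here are our own convention, not an AutoGen API:

```typescript
// Sketch: wrap any async tool function so each call is audited. The tool
// name and event names are illustrative; adapt them to your log schema.
function auditTool<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>,
  write: (line: string) => void,
) {
  return async (...args: A): Promise<R> => {
    const emit = (event: string, data?: Record<string, unknown>) =>
      write(
        JSON.stringify({ ts: new Date().toISOString(), tool: name, event, data }) +
          "\n",
      );
    emit("tool_call_started", { args });
    try {
      const result = await fn(...args);
      emit("tool_call_finished");
      return result;
    } catch (err) {
      emit("tool_call_failed", {
        message: err instanceof Error ? err.message : String(err),
      });
      throw err; // rethrow so callers still see the failure
    }
  };
}

const lines: string[] = [];
// Hypothetical tool: a stubbed account lookup standing in for a real API call.
const lookupAccount = auditTool(
  "lookupAccount",
  async (id: string) => ({ id, balance: 100 }),
  (line) => lines.push(line),
);
const account = await lookupAccount("acct-1");
```

In the tutorial's setup you would pass `(line) => process.stdout.write(line)` as the sink so tool events land in the same JSONL stream.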
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.