LangGraph Tutorial (TypeScript): adding audit logs for intermediate developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to add audit logs to a LangGraph workflow in TypeScript, so every node execution, state change, and final result is recorded. You need this when you’re building agent systems that must be explainable, traceable, or reviewable by compliance, support, or internal ops.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • @langchain/langgraph
  • @langchain/openai
  • dotenv
  • An OpenAI API key in OPENAI_API_KEY
  • A project set up with "type": "module" in package.json

Install the packages:

npm install @langchain/langgraph @langchain/openai dotenv
npm install -D typescript tsx @types/node

Step-by-Step

  1. Start with a small graph and define an audit event shape.

You want the audit log format locked down before wiring it into the graph. Keep it simple: timestamp, node name, event type, and payload.

export type AuditEvent = {
  ts: string;
  node: string;
  type: "start" | "end" | "error";
  payload: unknown;
};

export type AgentState = {
  input: string;
  draft?: string;
  final?: string;
  audit: AuditEvent[];
};
  2. Create a reusable audit helper that appends events into state.

This keeps logging logic out of your business nodes. The Annotation-based definition below is the runtime counterpart of the plain AgentState type from step 1; the reducer on audit merges each node's new events into the running log. In production, you'd swap the in-state array for a database or event stream, but the pattern stays the same.

import { Annotation } from "@langchain/langgraph";

export const StateAnnotation = Annotation.Root({
  input: Annotation<string>(),
  draft: Annotation<string | undefined>(),
  final: Annotation<string | undefined>(),
  audit: Annotation<AuditEvent[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

export function makeAuditEvent(
  node: string,
  type: AuditEvent["type"],
  payload: unknown
): AuditEvent {
  return {
    ts: new Date().toISOString(),
    node,
    type,
    payload,
  };
}
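Before wiring this into nodes, it's worth seeing what the concat reducer actually does. The following is a standalone sketch (no LangGraph import; it simply replays the reducer defined above) showing how each node's returned audit array is merged into one chronological log:

```typescript
type AuditEvent = {
  ts: string;
  node: string;
  type: "start" | "end" | "error";
  payload: unknown;
};

// The same reducer passed to Annotation above: new events append to the log.
const reducer = (left: AuditEvent[], right: AuditEvent[]) => left.concat(right);

const fromDraft: AuditEvent[] = [
  { ts: new Date().toISOString(), node: "draftNode", type: "start", payload: {} },
  { ts: new Date().toISOString(), node: "draftNode", type: "end", payload: {} },
];
const fromFinalize: AuditEvent[] = [
  { ts: new Date().toISOString(), node: "finalizeNode", type: "start", payload: {} },
];

// LangGraph applies the reducer once per state update, oldest update first,
// so the merged log reads in execution order.
const merged = reducer(reducer([], fromDraft), fromFinalize);
console.log(merged.map((e) => `${e.node}:${e.type}`).join(", "));
```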
  3. Build nodes that emit audit entries before and after work.

Each node returns both its business output and a new audit entry. This gives you an immutable trail of what happened at each step without mutating shared state.

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
});

export async function draftNode(state: typeof StateAnnotation.State) {
  const start = makeAuditEvent("draftNode", "start", { inputLength: state.input.length });
  const response = await model.invoke([
    ["system", "Write a concise internal note."],
    ["human", state.input],
  ]);

  const draft = response.content.toString();
  const end = makeAuditEvent("draftNode", "end", { draft });

  return {
    draft,
    audit: [start, end],
  };
}

export async function finalizeNode(state: typeof StateAnnotation.State) {
  const start = makeAuditEvent("finalizeNode", "start", { hasDraft: Boolean(state.draft) });
  const final = `Final note:\n${state.draft ?? ""}`;
  const end = makeAuditEvent("finalizeNode", "end", { finalLength: final.length });

  return {
    final,
    audit: [start, end],
  };
}
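Neither node above ever emits the "error" event type. One way to cover failures is a small wrapper; this is a sketch (the withAudit name is mine, not a LangGraph API), and it assumes you'd rather record the failure as a state update and let the graph continue than crash the run:

```typescript
type AuditEvent = {
  ts: string;
  node: string;
  type: "start" | "end" | "error";
  payload: unknown;
};

// Duplicated from step 2 so this sketch runs on its own.
function makeAuditEvent(node: string, type: AuditEvent["type"], payload: unknown): AuditEvent {
  return { ts: new Date().toISOString(), node, type, payload };
}

// Hypothetical wrapper: runs a node body and, on a thrown error, returns an
// "error" audit event as the state update instead of letting the run crash.
function withAudit<S>(node: string, fn: (state: S) => Promise<Record<string, unknown>>) {
  return async (state: S) => {
    try {
      return await fn(state);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return { audit: [makeAuditEvent(node, "error", { message })] };
    }
  };
}

// Usage: a node that always fails still leaves a trace in the audit log.
const failingNode = withAudit<{ input: string }>("failingNode", async () => {
  throw new Error("boom");
});

const update = await failingNode({ input: "x" });
const events = update.audit as AuditEvent[];
console.log(events[0].node, events[0].type);
```

If you want a failure to halt the graph instead, rethrow inside the catch after building the event.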
  4. Wire the nodes into a LangGraph workflow.

The graph only needs one path here, but the same logging pattern works when you add branching, retries, or tool calls. The important part is that every node returns audit entries as part of state updates.

import { StateGraph, START, END } from "@langchain/langgraph";

const graph = new StateGraph(StateAnnotation)
  .addNode("draftNode", draftNode)
  .addNode("finalizeNode", finalizeNode)
  .addEdge(START, "draftNode")
  .addEdge("draftNode", "finalizeNode")
  .addEdge("finalizeNode", END);

export const app = graph.compile();
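When you do add branching, the audit trail itself becomes useful routing input. As a sketch (the routeAfterDraft function and its two-retry policy are assumptions, not part of this graph), a router for LangGraph's addConditionalEdges could cap retries by counting error events:

```typescript
type AuditEvent = {
  ts: string;
  node: string;
  type: "start" | "end" | "error";
  payload: unknown;
};

type AgentState = {
  input: string;
  draft?: string;
  final?: string;
  audit: AuditEvent[];
};

// Hypothetical router: retry drafting until a draft exists or draftNode has
// errored twice, then hand off to finalizeNode regardless.
function routeAfterDraft(state: AgentState): "draftNode" | "finalizeNode" {
  const draftErrors = state.audit.filter(
    (e) => e.node === "draftNode" && e.type === "error"
  ).length;
  return !state.draft && draftErrors < 2 ? "draftNode" : "finalizeNode";
}

console.log(routeAfterDraft({ input: "x", audit: [] }));
```

You would register this with .addConditionalEdges("draftNode", routeAfterDraft) in place of the plain draftNode → finalizeNode edge.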
  5. Run the graph and print the audit trail.

This is where you verify that intermediate steps are being captured correctly. In a real service, this is also where you’d persist the audit array to your database or forward it to your logging pipeline.

import "dotenv/config";

async function main() {
  const result = await app.invoke({
    input: "Summarize the customer complaint for the case manager.",
    audit: [],
  });

  console.log("Final output:");
  console.log(result.final);

  console.log("\nAudit log:");
  for (const entry of result.audit) {
    console.log(JSON.stringify(entry));
  }
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});

Testing It

Run the file with tsx and confirm you get both a final answer and multiple audit entries in order. You should see at least four events total: start and end for each node. If you only see the final output, your reducer is wrong or your nodes are not returning audit updates. If you want stronger validation, assert that every event has a timestamp and that node matches one of your graph node names.
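That stronger validation can be sketched as a small check function (validateAudit and KNOWN_NODES are names I'm introducing for illustration, mirroring the node names registered in step 4):

```typescript
type AuditEvent = {
  ts: string;
  node: string;
  type: "start" | "end" | "error";
  payload: unknown;
};

// The node names registered on the graph in step 4.
const KNOWN_NODES = new Set(["draftNode", "finalizeNode"]);

// Throws if any event lacks a parseable timestamp or names an unknown node.
function validateAudit(audit: AuditEvent[]): void {
  for (const event of audit) {
    if (Number.isNaN(Date.parse(event.ts))) {
      throw new Error(`Invalid timestamp: ${event.ts}`);
    }
    if (!KNOWN_NODES.has(event.node)) {
      throw new Error(`Unknown node in audit log: ${event.node}`);
    }
  }
}

const sample: AuditEvent[] = [
  { ts: new Date().toISOString(), node: "draftNode", type: "start", payload: {} },
];
validateAudit(sample); // passes silently
console.log("audit log valid");
```

Call validateAudit(result.audit) right after app.invoke in the runner to catch malformed events early.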

Next Steps

  • Add a persistence layer for audits using Postgres or DynamoDB.
  • Include runId, userId, and request metadata in every event.
  • Use LangGraph callbacks or middleware when you need cross-cutting logging across many graphs.
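For the second bullet, one approach is a per-run factory so every event shares a runId (the field names and factory shape here are suggestions, not a LangGraph convention):

```typescript
import { randomUUID } from "node:crypto";

type AuditEventV2 = {
  ts: string;
  node: string;
  type: "start" | "end" | "error";
  payload: unknown;
  runId: string;   // one UUID per graph invocation
  userId?: string; // whoever triggered the run, if known
};

// A factory bound to one run, so every event it creates shares the same runId.
function makeRunAuditor(userId?: string) {
  const runId = randomUUID();
  return (node: string, type: AuditEventV2["type"], payload: unknown): AuditEventV2 => ({
    ts: new Date().toISOString(),
    node,
    type,
    payload,
    runId,
    userId,
  });
}

const audit = makeRunAuditor("user-123");
const first = audit("draftNode", "start", {});
const second = audit("finalizeNode", "end", {});
console.log(first.runId === second.runId); // same run, so ids match
```

You would create one auditor per invoke call and pass it to your nodes in place of the bare makeAuditEvent helper.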

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
