LangGraph Tutorial (TypeScript): adding audit logs for beginners

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to add audit logs to a LangGraph workflow in TypeScript by recording every important state transition and model decision. You need this when your agent handles regulated workflows, because you want a durable trail of what happened, when it happened, and which node produced each action.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • @langchain/langgraph
  • @langchain/openai
  • dotenv
  • An OpenAI API key in OPENAI_API_KEY
  • A basic TypeScript project with "module": "NodeNext" or compatible ESM settings

Install the packages:

npm install @langchain/langgraph @langchain/core @langchain/openai dotenv
npm install -D typescript tsx @types/node

Step-by-Step

  1. Start with a simple graph state that includes an auditLog array. The key idea is to keep audit data inside the graph state so every node can append to it without extra infrastructure.
import "dotenv/config";
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const GraphState = Annotation.Root({
  input: Annotation<string>(),
  output: Annotation<string>({
    default: () => "",
    reducer: (_prev, next) => next,
  }),
  auditLog: Annotation<Array<{ node: string; event: string; ts: string }>>({
    default: () => [],
    reducer: (prev, next) => prev.concat(next),
  }),
});
  2. Add a helper that writes audit entries consistently. In production, you want one format for every event so downstream systems can parse it without guessing.
function audit(node: string, event: string) {
  return {
    node,
    event,
    ts: new Date().toISOString(),
  };
}
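Because every entry has the same three fields, downstream systems can treat the log as newline-delimited JSON. Here is a standalone sketch of that round trip (the `audit` helper is repeated so the snippet runs on its own):

```typescript
function audit(node: string, event: string) {
  return { node, event, ts: new Date().toISOString() };
}

const entries = [audit("model", "start"), audit("model", "done")];

// One JSON object per line: trivial for a log pipeline to parse back.
const ndjson = entries.map((e) => JSON.stringify(e)).join("\n");
const parsed = ndjson.split("\n").map((line) => JSON.parse(line));

console.log(parsed.length);  // 2
console.log(parsed[0].node); // "model"
```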
  3. Create a model node that logs before and after the LLM call. This gives you traceability for both the input being sent and the response coming back.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function modelNode(state: typeof GraphState.State) {
  const prompt = `Answer briefly: ${state.input}`;
  // Record the prompt before invoking the model, so the timestamp
  // reflects when the request was actually sent.
  const promptEntry = audit("model", `prompt=${JSON.stringify(prompt)}`);

  const res = await llm.invoke(prompt);
  // `content` can be a string or an array of content blocks; normalize it.
  const text =
    typeof res.content === "string" ? res.content : JSON.stringify(res.content);

  return {
    output: text,
    auditLog: [
      promptEntry,
      audit("model", `response=${JSON.stringify(text)}`),
    ],
  };
}
  4. Add a second node that finalizes the response and records the handoff. This is useful when you want to distinguish raw model output from business-approved output.
async function finalizeNode(state: typeof GraphState.State) {
  return {
    output: state.output.trim(),
    auditLog: [audit("finalize", "trimmed output")],
  };
}
  5. Wire the graph together and compile it. The graph itself is small, but the pattern scales because each node only needs to return its own audit events.
const graph = new StateGraph(GraphState)
  .addNode("model", modelNode)
  .addNode("finalize", finalizeNode)
  .addEdge(START, "model")
  .addEdge("model", "finalize")
  .addEdge("finalize", END);

const app = graph.compile();
  6. Run the graph and print both the answer and the audit trail. For beginners, this makes it obvious that the logging is part of state rather than hidden in console output.
const result = await app.invoke({
  input: "What is an audit log in one sentence?",
});

console.log("OUTPUT:", result.output);
console.log("AUDIT LOG:");
for (const entry of result.auditLog) {
  console.log(`${entry.ts} [${entry.node}] ${entry.event}`);
}

Testing It

Run the file with npx tsx your-file.ts after setting OPENAI_API_KEY. You should see a short answer plus multiple audit entries showing the model prompt, model response, and finalization step.

If the output is empty, check that your reducer for auditLog concatenates arrays instead of replacing them. If you only see one log entry, your node probably returned a single object instead of an array under auditLog.
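To see why the reducer matters, here is a standalone sketch (independent of LangGraph) contrasting a concatenating reducer with a replacing one:

```typescript
type AuditEntry = { node: string; event: string; ts: string };

// Concatenating reducer: keeps every entry from every node.
const concatReducer = (prev: AuditEntry[], next: AuditEntry[]) =>
  prev.concat(next);

// Replacing reducer: silently drops earlier entries -- the bug described above.
const replaceReducer = (_prev: AuditEntry[], next: AuditEntry[]) => next;

const first = [{ node: "model", event: "prompt sent", ts: "t1" }];
const second = [{ node: "finalize", event: "trimmed output", ts: "t2" }];

console.log(concatReducer(first, second).length);  // 2 -- full trail preserved
console.log(replaceReducer(first, second).length); // 1 -- history lost
```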

For a quick regression test, change the input text and confirm that the prompt entry in the audit trail changes accordingly. That tells you your logs are tied to actual runtime state, not hardcoded strings.
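One way to automate that check is a small helper over the finished run's audit log. The function name and event format here are assumptions mirroring the tutorial's `prompt=` convention, not part of any LangGraph API:

```typescript
type AuditEntry = { node: string; event: string; ts: string };

// Hypothetical check: confirm the model node's prompt entry actually
// contains the input that was sent into the graph.
function promptEntryMatchesInput(log: AuditEntry[], input: string): boolean {
  return log.some(
    (e) =>
      e.node === "model" &&
      e.event.startsWith("prompt=") &&
      e.event.includes(input)
  );
}

const log: AuditEntry[] = [
  { node: "model", event: 'prompt="Answer briefly: What is an audit log?"', ts: "t1" },
];

console.log(promptEntryMatchesInput(log, "What is an audit log?")); // true
console.log(promptEntryMatchesInput(log, "different input"));       // false
```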

Next Steps

  • Move auditLog storage out of state and into Postgres or DynamoDB for long-term retention
  • Add a traceId field to every log entry so you can correlate multi-step workflows
  • Use LangGraph checkpointing so you can replay runs and compare state transitions
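As a sketch of the traceId idea, the audit helper can close over one id per run. `randomUUID` is Node's built-in crypto API; the rest mirrors the helper from the tutorial:

```typescript
import { randomUUID } from "node:crypto";

// One traceId per workflow run; every entry in that run carries it.
const traceId = randomUUID();

function audit(node: string, event: string) {
  return { traceId, node, event, ts: new Date().toISOString() };
}

const a = audit("model", "start");
const b = audit("finalize", "done");
console.log(a.traceId === b.traceId); // true -- both entries share the run's id
```

In a real multi-run service you would generate the id at the start of each invocation (for example, in the first node) rather than at module load, so concurrent runs do not share one id.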

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

