LangGraph Tutorial (TypeScript): adding audit logs for advanced developers
This tutorial shows how to add durable audit logs to a LangGraph TypeScript workflow without polluting your business logic. You’ll capture every node transition, input, output, and error in a structured log that can be shipped to your observability stack or stored for compliance review.
What You'll Need
- Node.js 20+
- TypeScript 5+
- The @langchain/langgraph package
- The zod package for structured state validation
- An OpenAI API key if you want to run an LLM node
- A logging destination:
  - console for local testing
  - file, Datadog, OpenSearch, or Postgres in production
Install the packages:
npm install @langchain/langgraph @langchain/core zod @langchain/openai
npm install -D typescript tsx @types/node
Step-by-Step
- Start with a state shape that includes both business data and an audit trail. Keep the audit data in the graph state so every node can append events without relying on global variables.
import { z } from "zod";

export const GraphState = z.object({
  input: z.string(),
  output: z.string().optional(),
  auditLog: z.array(
    z.object({
      ts: z.string(),
      node: z.string(),
      event: z.enum(["start", "end", "error"]),
      payload: z.record(z.any()),
    })
  ),
});

export type GraphStateType = z.infer<typeof GraphState>;
- Create a small helper that builds one audit event at a time. This keeps the logging format consistent and makes it easy to swap console logging for a real sink later.
type AuditEvent = GraphStateType["auditLog"][number];

export function createAuditEvent(
  node: string,
  event: AuditEvent["event"],
  payload: Record<string, unknown>
): AuditEvent {
  return {
    ts: new Date().toISOString(),
    node,
    event,
    payload,
  };
}
- Build your nodes as plain async functions and append audit records inside them. The important part is that each node returns only the fields it owns, so LangGraph can merge state cleanly.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export async function classifyNode(state: GraphStateType) {
  const start = createAuditEvent("classifyNode", "start", { input: state.input });

  const res = await model.invoke([
    ["system", "Classify the user request into one short sentence."],
    ["user", state.input],
  ]);

  return {
    output: String(res.content),
    auditLog: [
      start,
      createAuditEvent("classifyNode", "end", {
        output: String(res.content),
      }),
    ],
  };
}
- Add error handling at the node boundary so failures are logged before they escape the graph. This is where advanced teams usually get bitten, because successful paths are easy and failure paths are what auditors ask for.
export async function auditedNode(state: GraphStateType) {
  // Create the start event outside the try block so it survives into the error path.
  const start = createAuditEvent("auditedNode", "start", { inputLength: state.input.length });
  try {
    const result = await model.invoke([
      ["system", "Return a concise answer."],
      ["user", state.input],
    ]);
    return {
      output: String(result.content),
      auditLog: [
        start,
        createAuditEvent("auditedNode", "end", {
          outputLength: String(result.content).length,
        }),
      ],
    };
  } catch (err) {
    return {
      auditLog: [
        start,
        createAuditEvent("auditedNode", "error", {
          message: err instanceof Error ? err.message : String(err),
        }),
      ],
    };
  }
}
- Wire the nodes into a LangGraph workflow and print the final audit trail after execution. This example uses a minimal linear graph so you can focus on the logging pattern before adding branches or retries.
import { StateGraph, START, END } from "@langchain/langgraph";

const WorkflowState = GraphState;

const graph = new StateGraph(WorkflowState)
  .addNode("auditedNode", auditedNode)
  .addEdge(START, "auditedNode")
  .addEdge("auditedNode", END)
  .compile();

async function main() {
  const result = await graph.invoke({
    input: "Summarize this customer complaint in one sentence.",
    auditLog: [],
  });

  console.log("OUTPUT:", result.output);
  console.log("AUDIT LOG:");
  for (const entry of result.auditLog) {
    console.log(JSON.stringify(entry));
  }
}

main().catch(console.error);
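This runs as-is because the graph has a single node, but be aware of the merge behavior: a plain state field is typically replaced by whatever the last node returned, so once you add more nodes that each return only their own auditLog entries, earlier events can be lost. One option is to give the audit channel a concatenating reducer. The sketch below is a minimal illustration using the Annotation API from @langchain/langgraph instead of the Zod schema above; AuditedState and auditedGraph are illustrative names, and if you prefer to stay with Zod, check how your LangGraph version attaches reducers to Zod state.

// A minimal sketch, reusing the AuditEvent type and auditedNode from earlier steps.
// The reducer appends incoming events to the channel instead of replacing it.
import { Annotation } from "@langchain/langgraph";

const AuditedState = Annotation.Root({
  input: Annotation<string>(),
  output: Annotation<string | undefined>(),
  auditLog: Annotation<AuditEvent[]>({
    // Concatenate whatever a node returns onto the existing trail.
    reducer: (existing, incoming) => existing.concat(incoming),
    default: () => [],
  }),
});

// The wiring stays the same; nodes keep returning only the events they produced.
const auditedGraph = new StateGraph(AuditedState)
  .addNode("auditedNode", auditedNode)
  .addEdge(START, "auditedNode")
  .addEdge("auditedNode", END)
  .compile();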
- If you want production-grade logs, export each event as JSON and ship it outside the process. In practice, you should write to stdout in containers and let your platform collect logs, or push the same structure into Kafka/Postgres if you need queryable retention.
export function emitAudit(event: AuditEvent) {
  process.stdout.write(`${JSON.stringify(event)}\n`);
}

export function appendAndEmit(
  currentLog: AuditEvent[],
  node: string,
  eventType: AuditEvent["event"],
  payload: Record<string, unknown>
) {
  const event = createAuditEvent(node, eventType, payload);
  emitAudit(event);
  return [...currentLog, event];
}
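As a usage sketch, here is a node (emittingNode is an illustrative name) that threads state.auditLog through appendAndEmit, so every event is written to stdout the moment it happens and the returned state carries the accumulated trail. If you adopted the concatenating reducer shown earlier, return only the new events instead of the accumulated array to avoid duplicates.

// A sketch only: emittingNode is a hypothetical node using the helpers above.
export async function emittingNode(state: GraphStateType) {
  let auditLog = appendAndEmit(state.auditLog, "emittingNode", "start", {
    inputLength: state.input.length,
  });
  try {
    const res = await model.invoke([
      ["system", "Return a concise answer."],
      ["user", state.input],
    ]);
    auditLog = appendAndEmit(auditLog, "emittingNode", "end", {
      outputLength: String(res.content).length,
    });
    return { output: String(res.content), auditLog };
  } catch (err) {
    auditLog = appendAndEmit(auditLog, "emittingNode", "error", {
      message: err instanceof Error ? err.message : String(err),
    });
    return { auditLog };
  }
}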
Testing It
Run the file with tsx and confirm you get both an OUTPUT line and multiple JSON audit entries. The important checks are that each run produces timestamps and node names, and that every start event appears before its matching end event.
Then force a failure by giving the model invalid credentials or by cutting network access. You should see an error event captured in the returned state (or emitted to stdout) rather than an unhandled exception escaping the graph.
If you’re integrating this into an existing graph, verify that every branch appends its own events and that merged logs preserve order across nodes. For compliance use cases, also confirm that sensitive fields are redacted before they enter the payload field.
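One way to handle that redaction step is a small scrubber applied to every payload before it becomes an audit event. The sketch below masks a hypothetical list of sensitive keys; the key names and the [REDACTED] placeholder are assumptions to adapt to your own data model.

// A minimal sketch: mask known-sensitive keys before they enter payload.
// SENSITIVE_KEYS is illustrative; extend it to match your data.
const SENSITIVE_KEYS = new Set(["email", "ssn", "accountNumber", "apiKey"]);

export function redactPayload(
  payload: Record<string, unknown>
): Record<string, unknown> {
  const redacted: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    redacted[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return redacted;
}

// Usage: createAuditEvent("classifyNode", "start", redactPayload({ input: state.input }));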
Next Steps
- Add middleware-style wrappers around every node so auditing is automatic instead of hand-written (a sketch follows this list).
- Persist audit events to Postgres with a schema like (run_id, ts, node, event, payload), also sketched below.
- Add correlation IDs from HTTP requests so one user action maps to one graph run end-to-end.
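Here is a rough sketch of the wrapper idea. withAudit is an illustrative name, not a LangGraph API; it assumes the wrapped node returns only business fields and lets the wrapper own auditLog, so pair it with the concatenating reducer from earlier if you need events to accumulate across nodes.

// A sketch of a middleware-style wrapper; withAudit and doWorkNode are illustrative names.
type NodeFn = (state: GraphStateType) => Promise<Partial<GraphStateType>>;

export function withAudit(name: string, fn: NodeFn): NodeFn {
  return async (state) => {
    const events = [createAuditEvent(name, "start", { inputLength: state.input.length })];
    try {
      const update = await fn(state);
      events.push(createAuditEvent(name, "end", { updatedKeys: Object.keys(update) }));
      return { ...update, auditLog: events };
    } catch (err) {
      events.push(
        createAuditEvent(name, "error", {
          message: err instanceof Error ? err.message : String(err),
        })
      );
      return { auditLog: events };
    }
  };
}

// Usage: .addNode("doWork", withAudit("doWork", doWorkNode))

And a minimal persistence sketch for the Postgres suggestion, assuming the pg package and a hypothetical audit_events table with the columns listed above; the run_id parameter doubles as the correlation ID from the last bullet.

// A sketch only: assumes `npm install pg`, a DATABASE_URL env var,
// and an audit_events(run_id, ts, node, event, payload) table.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function persistAudit(runId: string, event: AuditEvent) {
  await pool.query(
    "INSERT INTO audit_events (run_id, ts, node, event, payload) VALUES ($1, $2, $3, $4, $5)",
    [runId, event.ts, event.node, event.event, JSON.stringify(event.payload)]
  );
}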
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.