LangChain Tutorial (TypeScript): adding audit logs for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add audit logs to a LangChain TypeScript app so every prompt, model output, tool call, and chain error is captured in a structured way. You need this when you’re building agent workflows for regulated environments and need traceability for debugging, compliance reviews, or incident response.

What You'll Need

  • Node.js 19+ (the examples rely on the global crypto.randomUUID(), which is available without a flag from Node 19)
  • A TypeScript project with ts-node or tsx
  • These packages:
    • langchain
    • @langchain/openai
    • zod
    • dotenv
  • An OpenAI API key
  • Basic familiarity with LangChain runnables, prompts, and chat models

Step-by-Step

  1. Start with a minimal TypeScript setup and install the dependencies. I’m using OpenAI here because the LangChain integration is straightforward, but the audit logging pattern works with any chat model provider.
npm init -y
npm install langchain @langchain/openai zod dotenv
npm install -D typescript tsx @types/node
  2. Create a small logger that writes audit events as JSON lines. This keeps the log format easy to ship into CloudWatch, Datadog, Splunk, or a SIEM later.
// audit-log.ts
import { appendFile } from "node:fs/promises";

export type AuditEvent = {
  timestamp: string;
  type: "prompt" | "response" | "tool" | "error";
  runId: string;
  data: Record<string, unknown>;
};

export async function writeAuditEvent(event: AuditEvent) {
  await appendFile("audit.log", JSON.stringify(event) + "\n", "utf8");
}
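Because each event is a single JSON line, the format is trivial to verify: reading the file back is just splitting on newlines and parsing each line. A minimal sketch of that round trip (it redeclares AuditEvent locally and uses an in-memory string instead of the real file, so nothing here touches disk):

```typescript
// Local copy of the AuditEvent type from audit-log.ts.
type AuditEvent = {
  timestamp: string;
  type: "prompt" | "response" | "tool" | "error";
  runId: string;
  data: Record<string, unknown>;
};

const event: AuditEvent = {
  timestamp: new Date().toISOString(),
  type: "prompt",
  runId: "run-123",
  data: { question: "Summarize the policy for password resets." },
};

// One JSON object per line, exactly as writeAuditEvent appends it.
const line = JSON.stringify(event) + "\n";

// Reading audit.log back: split on newlines, parse each line.
const parsed = line
  .trim()
  .split("\n")
  .map((l) => JSON.parse(l) as AuditEvent);

console.log(parsed[0].runId); // "run-123"
```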
  3. Build your chain and wrap each important boundary with logging. The key idea is simple: log input before the model call, log output after it returns, and log failures in the catch path.
// index.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";
import { writeAuditEvent } from "./audit-log";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise support assistant."],
  ["human", "{question}"],
]);

const chain = prompt.pipe(model);

async function run() {
  const runId = crypto.randomUUID();
  const question = "Summarize the policy for password resets.";

  await writeAuditEvent({
    timestamp: new Date().toISOString(),
    type: "prompt",
    runId,
    data: { question },
  });

  try {
    const result = await chain.invoke({ question });

    await writeAuditEvent({
      timestamp: new Date().toISOString(),
      type: "response",
      runId,
      data: { content: result.content },
    });

    console.log(result.content);
  } catch (error) {
    await writeAuditEvent({
      timestamp: new Date().toISOString(),
      type: "error",
      runId,
      data: {
        message: error instanceof Error ? error.message : String(error),
      },
    });

    throw error;
  }
}

run();
  4. If your workflow includes tools, log those separately too. In production systems, tool calls are usually the most important part of the audit trail because they show what external action was requested and with what parameters.
// tool-example.ts
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import { writeAuditEvent } from "./audit-log";

export const lookupCustomerTool = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Look up a customer record by ID.",
  schema: z.object({
    customerId: z.string(),
  }),
  func: async ({ customerId }) => {
    await writeAuditEvent({
      timestamp: new Date().toISOString(),
      type: "tool",
      // In a real app, thread the chain's runId in here so tool events
      // correlate with the run that triggered them (see the next step).
      runId: crypto.randomUUID(),
      data: { toolName: "lookup_customer", customerId },
    });

    return JSON.stringify({
      customerId,
      status: "active",
      tier: "gold",
    });
  },
});
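One wrinkle in the tool above: minting a fresh UUID per call makes it hard to tie a tool event back to the chain run that triggered it. One way around this is to build the tool's func per run so it closes over the chain's runId. A hedged sketch of that pattern (makeLookupCustomerFunc is an illustrative name, not a LangChain API, and an in-memory sink stands in for writeAuditEvent so the example is self-contained):

```typescript
type ToolAuditEvent = {
  timestamp: string;
  type: "tool";
  runId: string;
  data: Record<string, unknown>;
};

// In-memory sink for illustration; swap in writeAuditEvent from audit-log.ts.
const events: ToolAuditEvent[] = [];

// Factory: returns the function you would pass as `func` to
// DynamicStructuredTool, bound to the chain's runId for this run.
function makeLookupCustomerFunc(runId: string) {
  return async ({ customerId }: { customerId: string }) => {
    events.push({
      timestamp: new Date().toISOString(),
      type: "tool",
      runId, // same runId as the chain's prompt/response events
      data: { toolName: "lookup_customer", customerId },
    });

    return JSON.stringify({ customerId, status: "active", tier: "gold" });
  };
}

const func = makeLookupCustomerFunc("run-123");
void func({ customerId: "c-42" }); // the audit push happens synchronously
console.log(events[0].runId); // "run-123"
```

The cost of this pattern is that you construct the tool per request rather than once at module load, which is usually acceptable for audited workflows.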
  5. If you want cleaner production code, centralize audit logging behind a helper that accepts the LangChain run context. That way you can reuse it across chains instead of scattering file writes through business logic.
// audit-helper.ts
import { writeAuditEvent } from "./audit-log";

export async function logChainEvent(
  type: "prompt" | "response" | "tool" | "error",
  runId: string,
  data: Record<string, unknown>
) {
  return writeAuditEvent({
    timestamp: new Date().toISOString(),
    type,
    runId,
    data,
  });
}
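To see how the helper keeps call sites clean, here is a hedged sketch of a generic wrapper built on the same pattern (auditedInvoke is an illustrative name, not a LangChain API). It logs the prompt/response/error envelope around any async call; an in-memory sink stands in for logChainEvent so the example is self-contained:

```typescript
type EventType = "prompt" | "response" | "tool" | "error";

// In-memory stand-in for logChainEvent from audit-helper.ts.
const sink: Array<{ type: EventType; runId: string; data: Record<string, unknown> }> = [];

async function logChainEvent(type: EventType, runId: string, data: Record<string, unknown>) {
  sink.push({ type, runId, data });
}

// Generic wrapper: log input, run the call, log output or error.
async function auditedInvoke<I extends Record<string, unknown>, O>(
  runId: string,
  input: I,
  call: (input: I) => Promise<O>
): Promise<O> {
  await logChainEvent("prompt", runId, input);
  try {
    const output = await call(input);
    await logChainEvent("response", runId, { output });
    return output;
  } catch (error) {
    await logChainEvent("error", runId, {
      message: error instanceof Error ? error.message : String(error),
    });
    throw error;
  }
}

// Usage with a fake "chain" that just echoes; in the real app the callback
// would be (input) => chain.invoke(input).
const done = auditedInvoke("run-123", { question: "hi" }, async (i) => `echo: ${i.question}`);
done.then((answer) => console.log(answer)); // "echo: hi"
```

With this in place, the try/catch boilerplate from index.ts collapses into a single call per invocation.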
  6. Run the app and inspect the generated log file. You should see one JSON object per line with enough detail to reconstruct what happened without dumping your whole application state.
npx tsx index.ts
cat audit.log

Testing It

Run the script once with a valid API key and confirm that audit.log gets created in the project root. You should see at least two entries for a successful request: one prompt event and one response event.

Then force an error by removing your API key or sending an invalid request shape, and confirm an error event is written before the process exits. If you added tools, trigger one of them and verify that its input parameters are logged as a separate tool event.

For a real system, check that every entry includes a stable runId, an ISO timestamp, and only the fields you actually want in compliance logs. Don’t log secrets, full PII payloads, or raw prompts if your policy forbids them.

Next Steps

  • Add redaction rules before writing logs so sensitive fields like account numbers and tokens never hit disk.
  • Replace file-based logging with a structured logger like Pino or Winston and ship events to your observability stack.
  • Wire LangChain callbacks into your chains so you can capture deeper lifecycle events without manually wrapping every invocation.
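The redaction bullet above can be sketched as a small pre-write filter applied to an event's data before it is serialized. A minimal, hedged example (the field list and the redactData name are illustrative; tune both to your compliance policy):

```typescript
// Fields that must never reach disk; extend to match your policy.
const SENSITIVE_KEYS = new Set(["apiKey", "token", "password", "accountNumber"]);

// Recursively replace sensitive values before the event is serialized.
function redactData(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redactData);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEYS.has(k) ? [k, "[REDACTED]"] : [k, redactData(v)]
      )
    );
  }
  return value;
}

const safe = redactData({
  question: "reset my password",
  user: { accountNumber: "12345678", name: "Ada" },
});

console.log(JSON.stringify(safe));
// {"question":"reset my password","user":{"accountNumber":"[REDACTED]","name":"Ada"}}
```

Calling redactData on event.data inside writeAuditEvent is the natural integration point, since it guarantees nothing skips the filter.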


By Cyprian Aarons, AI Consultant at Topiax.
