LangChain Tutorial (TypeScript): adding audit logs for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add audit logs to a LangChain TypeScript app so every model call, tool call, and user request gets recorded. You need this when you work in regulated environments like banking or insurance, where you must prove what the agent saw, what it did, and when it did it.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build step
  • langchain
  • @langchain/openai
  • dotenv
  • An OpenAI API key
  • A writable log destination, like a local file or centralized logging service

Install the packages:

npm install langchain @langchain/openai dotenv
npm install -D typescript ts-node @types/node

Create a .env file:

OPENAI_API_KEY=your_openai_key_here

Step-by-Step

  1. Start with a small LangChain chain and define the audit record shape. Keep the audit payload boring and structured: timestamp, event type, input, output, and metadata. That makes it easy to search later.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

type AuditEvent = {
  timestamp: string;
  eventType: "request" | "response" | "error";
  userId: string;
  traceId: string;
  input?: string;
  output?: string;
  error?: string;
};

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant for customer support."],
  ["human", "{question}"],
]);
  2. Add a simple logger that writes JSON lines to disk. JSONL is a good default because each event is one line, which makes it easy to ship to Splunk, Datadog, or CloudWatch later. Note the crypto import here: makeTraceId depends on it, so it belongs in this step.
import crypto from "node:crypto";
import { appendFile } from "node:fs/promises";

async function writeAudit(event: AuditEvent) {
  await appendFile("audit.log", `${JSON.stringify(event)}\n`, "utf8");
}

function makeTraceId() {
  return crypto.randomUUID();
}
  3. Wrap the chain execution so you log before and after the model call. This is the key pattern: log the request first, then log either the response or the error.

async function runWithAudit(question: string, userId: string) {
  const traceId = makeTraceId();

  await writeAudit({
    timestamp: new Date().toISOString(),
    eventType: "request",
    userId,
    traceId,
    input: question,
  });

  try {
    const chain = prompt.pipe(model);
    const result = await chain.invoke({ question });
    const output = typeof result.content === "string" ? result.content : JSON.stringify(result.content);

    await writeAudit({
      timestamp: new Date().toISOString(),
      eventType: "response",
      userId,
      traceId,
      output,
    });

    return output;
  } catch (err) {
    await writeAudit({
      timestamp: new Date().toISOString(),
      eventType: "error",
      userId,
      traceId,
      error: err instanceof Error ? err.message : String(err),
    });

    throw err;
  }
}
  4. Call the wrapper from a small entrypoint. In production you would pass the real authenticated user ID from your session or JWT claims; for now we hardcode one so you can run it immediately.
async function main() {
  const answer = await runWithAudit(
    "What documents do I need to open a savings account?",
    "user_123"
  );

  console.log("Assistant:", answer);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
  5. If you want better observability, add tool-level audit events too. The same pattern works for tools like database lookups or policy checks, and in regulated workflows those are usually more important than the final answer.
async function auditToolCall(name: string, input: unknown, userId: string, traceId: string) {
  await writeAudit({
    timestamp: new Date().toISOString(),
    eventType: "request",
    userId,
    traceId,
    input: `${name}: ${JSON.stringify(input)}`,
  });
}

async function auditToolResult(name: string, output: unknown, userId: string, traceId: string) {
  await writeAudit({
    timestamp: new Date().toISOString(),
    eventType: "response",
    userId,
    traceId,
    output: `${name}: ${JSON.stringify(output)}`,
  });
}
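These helpers follow the same log-before, log-after pattern as runWithAudit. Here is a minimal sketch of how you might generalize that into a reusable wrapper for any async tool function. Everything here is illustrative and not part of the tutorial's code above: the file-backed writeAudit is swapped for an in-memory array so the ordering is easy to inspect, and policyCheck is a made-up tool.

```typescript
// Hypothetical wrapper: record a "request" event before the tool runs and a
// "response" event after it succeeds, tied together by the same traceId.
// Uses an in-memory trail instead of audit.log purely for demonstration.
const auditTrail: Array<{ eventType: string; name: string; userId: string; traceId: string }> = [];

async function runToolWithAudit<I, O>(
  name: string,
  toolFn: (input: I) => Promise<O>,
  input: I,
  userId: string,
  traceId: string
): Promise<O> {
  // Log the tool input first, mirroring the request/response pattern above.
  auditTrail.push({ eventType: "request", name, userId, traceId });
  const output = await toolFn(input);
  // Log the result once the tool succeeds.
  auditTrail.push({ eventType: "response", name, userId, traceId });
  return output;
}

// Usage with a made-up policy-check tool:
async function demo() {
  const verdict = await runToolWithAudit(
    "policyCheck",
    async (accountType: string) => accountType === "savings",
    "savings",
    "user_123",
    "trace_abc"
  );
  return verdict; // true, with two audit entries recorded for the trace
}
```

In your real app you would call auditToolCall and auditToolResult (or a wrapper like this built on them) from inside each tool so tool activity lands in audit.log next to the model events.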

Testing It

Run your script with npx ts-node your-file.ts. If everything is wired correctly, you should see an assistant response in stdout and an audit.log file in the project root.

Open audit.log and confirm that each interaction produced at least two entries:

  • one request
  • one response

If something fails, check for:

  • missing OPENAI_API_KEY
  • TypeScript import errors
  • permission issues writing audit.log

A good sanity test is to intentionally break the prompt or disconnect your API key once. You should get an error record in the log with the same traceId as the request.
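You can also check that pairing mechanically. A sketch (groupByTrace is not part of the tutorial's code, it just parses the JSONL you already produce) that groups events by traceId so every request can be matched to a response or error:

```typescript
// Parse JSONL audit entries and group event types by traceId, so you can
// verify that each request trace ends in a response or an error.
type ParsedEvent = { eventType: string; traceId: string };

function groupByTrace(jsonl: string): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue; // skip the trailing blank line at end of file
    const event = JSON.parse(line) as ParsedEvent;
    const events = groups.get(event.traceId) ?? [];
    events.push(event.eventType);
    groups.set(event.traceId, events);
  }
  return groups;
}

// Example with two synthetic log lines standing in for audit.log contents:
const sample =
  '{"eventType":"request","traceId":"t1"}\n' +
  '{"eventType":"error","traceId":"t1"}\n';
const groups = groupByTrace(sample);
console.log(groups.get("t1")); // [ 'request', 'error' ]
```

To run it against the real file, read audit.log with fs and pass its contents in.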

Next Steps

  • Move from local file logging to structured logging with pino or Winston.
  • Add redaction for PII like account numbers, SSNs, and policy IDs before writing logs.
  • Propagate traceId into downstream services so you can correlate agent actions across your stack.
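For the redaction step, a minimal sketch of masking PII before it reaches the log. The regexes here are illustrative only; production redaction needs vetted, domain-specific patterns:

```typescript
// Illustrative redaction: mask SSN-shaped values first, then any long bare
// digit run that looks like an account number. These patterns are examples,
// not a complete PII policy.
function redact(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")    // US SSN format
    .replace(/\b\d{8,17}\b/g, "[ACCOUNT_NUMBER]"); // account-number-like digits
}

console.log(redact("SSN 123-45-6789, account 00123456789"));
// prints: SSN [SSN], account [ACCOUNT_NUMBER]
```

You would apply redact to the input and output fields inside runWithAudit, before the AuditEvent is constructed, so unmasked values never touch disk.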

By Cyprian Aarons, AI Consultant at Topiax.