LangChain Tutorial (TypeScript): adding audit logs for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add durable audit logs to a LangChain TypeScript app so every model call, tool call, and final answer can be traced later. You need this when you’re building workflows for regulated environments, incident review, or any system where “what happened?” matters more than “it worked.”

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • An OpenAI API key
  • Packages:
    • langchain
    • @langchain/core
    • @langchain/openai
    • uuid
    • dotenv
    • zod
  • A place to write logs:
    • local JSONL file for development
    • database or object storage for production

Step-by-Step

  1. Start with a minimal LangChain setup and a structured audit record.
    The key idea is to log before and after the chain runs, not just the final output.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

type AuditEvent = {
  timestamp: string;
  runId: string;
  eventType: "request" | "response" | "error";
  input?: string;
  output?: string;
  error?: string;
};

const auditLog: AuditEvent[] = [];

function writeAudit(event: AuditEvent) {
  auditLog.push(event);
  console.log(JSON.stringify(event));
}

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise banking assistant."],
  ["human", "{question}"],
]);
  2. Wrap the chain execution so every request gets a stable runId.
    In production, this ID should come from your request context or trace ID, not from random local state.
import { v4 as uuidv4 } from "uuid";

async function runWithAudit(question: string) {
  const runId = uuidv4();

  writeAudit({
    timestamp: new Date().toISOString(),
    runId,
    eventType: "request",
    input: question,
  });

  try {
    const chain = prompt.pipe(model);
    const result = await chain.invoke({ question });

    writeAudit({
      timestamp: new Date().toISOString(),
      runId,
      eventType: "response",
      output: result.content.toString(),
    });

    return result.content.toString();
  } catch (error) {
    writeAudit({
      timestamp: new Date().toISOString(),
      runId,
      eventType: "error",
      error: error instanceof Error ? error.message : String(error),
    });
    throw error;
  }
}
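
In an HTTP service, that stable ID would come from the request itself rather than local state. A minimal sketch, assuming an upstream gateway sets an x-request-id header (the header name and handler shape are illustrative, not part of LangChain):

// Sketch: reuse an upstream request ID as the audit runId so these
// events correlate with the rest of the request's telemetry.
// Falls back to a fresh UUID for local or ad-hoc runs.
async function handleQuestion(
  headers: Record<string, string | undefined>,
  question: string,
) {
  const runId = headers["x-request-id"] ?? uuidv4();

  writeAudit({
    timestamp: new Date().toISOString(),
    runId,
    eventType: "request",
    input: question,
  });

  const result = await prompt.pipe(model).invoke({ question });
  const output = result.content.toString();

  writeAudit({
    timestamp: new Date().toISOString(),
    runId,
    eventType: "response",
    output,
  });

  return output;
}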
  3. Add tool-level logging if your chain uses external actions.
    This is where most audit gaps happen: people log the prompt and response but forget the tool call that changed state.
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

const transferSchema = z.object({
  accountId: z.string(),
  amount: z.number().positive(),
});

const transferTool = new DynamicStructuredTool({
  name: "transfer_funds",
  description: "Records a funds transfer request",
  schema: transferSchema,
  func: async ({ accountId, amount }) => {
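    // NOTE: a fresh runId is generated here for simplicity. In production,
    // pass the parent run's runId into the tool so its events correlate
    // with the request that triggered them.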
    const runId = uuidv4();

    writeAudit({
      timestamp: new Date().toISOString(),
      runId,
      eventType: "request",
      input: `tool=transfer_funds accountId=${accountId} amount=${amount}`,
    });

    const result = `Transfer queued for ${accountId}: $${amount}`;

    writeAudit({
      timestamp: new Date().toISOString(),
      runId,
      eventType: "response",
      output: result,
    });

    return result;
  },
});
  4. Use the tool in an agent and keep the same logging pattern around the agent call.
    The important part is that your audit trail captures both LLM reasoning and side effects in one place.
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";

async function main() {
  const tools = [transferTool];
  const agentPrompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a banking operations assistant."],
    ["human", "{input}"],
    ["placeholder", "{agent_scratchpad}"],
  ]);

  const agent = await createOpenAIFunctionsAgent({
    llm: model,
    tools,
    prompt: agentPrompt,
  });

  const executor = new AgentExecutor({
    agent,
    tools,
    verbose: false,
  });

  const runId = uuidv4();
  const input = "Queue a transfer of $125 to account A-9912.";

  writeAudit({
    timestamp: new Date().toISOString(),
    runId,
    eventType: "request",
    input,
  });

  const answer = await executor.invoke({ input });

  writeAudit({
    timestamp: new Date().toISOString(),
    runId,
    eventType: "response",
    output: String(answer.output),
  });

  console.log(answer.output);
}

main().catch(console.error);
  5. Persist the audit log as JSONL so it can be shipped to storage later.
    JSONL is a good default: writes are append-only, each line parses independently, and it feeds cleanly into log pipelines.
import { appendFileSync } from "node:fs";

function persistAudit(event: AuditEvent) {
  appendFileSync("audit-log.jsonl", `${JSON.stringify(event)}\n`);
}

function writePersistentAudit(event: AuditEvent) {
  auditLog.push(event);
  persistAudit(event);
}
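
To make persistence the default path without touching every call site, one option (a sketch; it would replace the writeAudit definition from step 1) is to fold the file append into writeAudit itself:

// Drop-in replacement for the step 1 writeAudit: each event is kept
// in memory, echoed to stdout, and appended to the JSONL file.
function writeAudit(event: AuditEvent) {
  auditLog.push(event);
  console.log(JSON.stringify(event));
  persistAudit(event);
}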

Testing It

Run the script with a simple prompt and confirm you see three things in order: a request event, a response event, and any tool events if your flow uses tools. Then open audit-log.jsonl and verify each line is valid JSON with a consistent runId for the same request path.
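
A quick way to run that check, sketched with Node's fs module (assuming audit-log.jsonl is in the working directory):

import { readFileSync } from "node:fs";

// Parse every line of the audit file and group event types by runId,
// so request/response/error pairing is easy to eyeball.
const lines = readFileSync("audit-log.jsonl", "utf8").trim().split("\n");
const byRun = new Map<string, string[]>();

for (const line of lines) {
  const event = JSON.parse(line) as AuditEvent; // throws on invalid JSON
  const events = byRun.get(event.runId) ?? [];
  events.push(event.eventType);
  byRun.set(event.runId, events);
}

for (const [runId, events] of byRun) {
  console.log(runId, events.join(" -> ")); // expect "request -> response"
}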

Test failure handling too by forcing an invalid API key or an invalid tool input. You want an error event written even when the chain throws, because missing failure logs are what make audits useless.
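
For the invalid-input case, a minimal sketch is to call the tool directly with input that violates its schema; DynamicStructuredTool validates against the Zod schema before func runs, so invoke rejects and the catch records the failure:

// A negative amount violates transferSchema's positive() constraint,
// so the invoke below throws before the tool body executes.
async function testInvalidToolInput() {
  try {
    await transferTool.invoke({ accountId: "A-9912", amount: -5 });
  } catch (error) {
    writeAudit({
      timestamp: new Date().toISOString(),
      runId: uuidv4(),
      eventType: "error",
      error: error instanceof Error ? error.message : String(error),
    });
  }
}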

If you’re using this in an HTTP service, send two requests concurrently and confirm their runIds do not collide. That’s the real test for whether your logging survives production traffic.
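
A minimal version of that check, reusing runWithAudit and the in-memory auditLog from step 2:

// Fire two requests concurrently, then confirm each "request" event
// carries a distinct runId.
async function testConcurrentRuns() {
  await Promise.all([
    runWithAudit("What is an audit log?"),
    runWithAudit("Why store logs as JSONL?"),
  ]);

  const requestIds = auditLog
    .filter((e) => e.eventType === "request")
    .map((e) => e.runId);

  const unique = new Set(requestIds).size === requestIds.length;
  console.log(unique ? "OK: runIds are unique" : "FAIL: runId collision");
}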

Next Steps

  • Add redaction before writing logs so secrets, PII, and account numbers don’t land in plain text (see the sketch after this list).
  • Move from local JSONL to OpenTelemetry spans or a centralized log sink like CloudWatch, Datadog, or Elasticsearch.
  • Add correlation IDs across your API layer, LangChain runs, and downstream service calls so one user request maps to one trace end-to-end.
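
Redaction is worth sketching because it changes what reaches persistAudit. The patterns below are illustrative placeholders, not production-grade PII detection; tune them to your own account-ID and secret formats:

// Hypothetical redaction pass applied before an event is persisted.
function redact(text: string): string {
  return text
    .replace(/A-\d{4,}/g, "A-****")         // account IDs like A-9912
    .replace(/sk-[A-Za-z0-9]+/g, "sk-***"); // OpenAI-style API keys
}

function writeRedactedAudit(event: AuditEvent) {
  persistAudit({
    ...event,
    input: event.input ? redact(event.input) : undefined,
    output: event.output ? redact(event.output) : undefined,
  });
}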

By Cyprian Aarons, AI Consultant at Topiax.