LangChain Tutorial (TypeScript): mocking LLM calls in tests for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to unit test LangChain-based TypeScript code without hitting a real model API. You’ll mock LLM calls at the boundary, keep your tests deterministic, and still verify that your prompt wiring, parsing, and retry logic behave correctly.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • A test runner: vitest or jest
  • LangChain packages:
    • langchain
    • @langchain/openai
    • @langchain/core
  • A mock library:
    • vitest mocking utilities or jest.fn()
  • Optional API key if you want to run a real integration test later:
    • OPENAI_API_KEY

Step-by-Step

  1. Start with a small LangChain function that calls an LLM through a single seam.
    The key here is not to scatter model construction across your app; keep it in one place so tests can replace it cleanly.
// src/summarize.ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

export async function summarizeText(text: string): Promise<string> {
  const prompt = PromptTemplate.fromTemplate(
    "Summarize this text in one sentence:\n\n{text}"
  );

  const llm = new ChatOpenAI({
    model: "gpt-4o-mini",
    temperature: 0,
  });

  const chain = prompt.pipe(llm);
  const result = await chain.invoke({ text });

  return result.content.toString();
}
  2. Add a test that mocks the model constructor and returns a fake response.
    This keeps the test focused on your chain logic instead of network behavior.
// src/summarize.test.ts
import { describe, expect, it, vi, beforeEach } from "vitest";
import { AIMessage } from "@langchain/core/messages";

// vi.mock calls are hoisted above the imports, so a shared mock function
// must be created with vi.hoisted -- a plain module-level const would not
// be initialized yet when the factory runs.
const { invokeMock } = vi.hoisted(() => ({ invokeMock: vi.fn() }));

vi.mock("@langchain/openai", () => ({
  ChatOpenAI: vi.fn().mockImplementation(() => ({
    // lc_runnable lets prompt.pipe() accept this stub as a Runnable
    // instead of coercing it into a RunnableMap.
    lc_runnable: true,
    invoke: invokeMock,
  })),
}));

import { summarizeText } from "./summarize";

describe("summarizeText", () => {
  beforeEach(() => {
    invokeMock.mockReset();
  });

  it("returns the mocked summary", async () => {
    invokeMock.mockResolvedValue(new AIMessage("Short summary."));
    const result = await summarizeText("Long input text");
    expect(result).toBe("Short summary.");
  });
});
  3. If you want stronger tests, mock at the runnable boundary instead of only mocking the constructor.
    This lets you assert the exact prompt input and return shape without depending on provider-specific behavior.
// src/prompt.test.ts
import { describe, expect, it } from "vitest";
import { PromptTemplate } from "@langchain/core/prompts";
import { AIMessage } from "@langchain/core/messages";

describe("prompt formatting", () => {
  it("formats the prompt deterministically", async () => {
    const prompt = PromptTemplate.fromTemplate(
      "Summarize this text in one sentence:\n\n{text}"
    );

    const formatted = await prompt.format({ text: "Alpha beta gamma" });
    expect(formatted).toContain("Alpha beta gamma");
    expect(formatted).toMatch(/Summarize this text/);

    const fakeModelOutput = new AIMessage("Summary.");
    expect(fakeModelOutput.content).toBe("Summary.");
  });
});
  4. For more advanced chains, extract dependencies and inject them into your function.
    This pattern makes retries, fallbacks, and alternate models easy to test because the chain no longer owns its own runtime dependencies.
// src/summarizeInjected.ts
import type { BaseLanguageModelInput } from "@langchain/core/language_models/base";
import { PromptTemplate } from "@langchain/core/prompts";

type LLMLike = {
  invoke(input: BaseLanguageModelInput): Promise<{ content: unknown }>;
};

export async function summarizeWithModel(
  text: string,
  llm: LLMLike
): Promise<string> {
  const prompt = PromptTemplate.fromTemplate(
    "Summarize this text in one sentence:\n\n{text}"
  );

  // Wrap the injected model in a function so pipe() coerces it to a
  // RunnableLambda; passing the bare object (or casting it with "as never")
  // would make pipe() treat it as a RunnableMap and break the chain.
  const chain = prompt.pipe((value) => llm.invoke(value));
  const result = await chain.invoke({ text });

  return String(result.content);
}
  5. Test the injected version with a plain object mock.
    This is usually the cleanest option for application code because your tests stay fast and don’t need module mocking at all.
// src/summarizeInjected.test.ts
import { describe, expect, it, vi } from "vitest";
import { AIMessage } from "@langchain/core/messages";
import { summarizeWithModel } from "./summarizeInjected";

describe("summarizeWithModel", () => {
  it("uses an injected fake model", async () => {
    const llm = {
      invoke: vi.fn().mockResolvedValue(new AIMessage("Injected summary.")),
    };

    const result = await summarizeWithModel("Some long article", llm);
    expect(result).toBe("Injected summary.");
    expect(llm.invoke).toHaveBeenCalledTimes(1);
  });
});
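An injected fake can also capture the exact input the chain sends it, which is handy when you want assertions on prompt content as well as output. Here is a minimal framework-free sketch; `makeRecordingFake` is a hypothetical helper, not a LangChain API, and it assumes PromptValue-like inputs stringify to the rendered prompt text:

```typescript
// A fake model that records every input it receives before replying.
// The { content } shape mirrors the minimal LLMLike contract above.
function makeRecordingFake(reply: string) {
  const inputs: string[] = [];
  return {
    inputs,
    async invoke(input: unknown): Promise<{ content: string }> {
      // Keep a string record of whatever the chain passed in.
      inputs.push(String(input));
      return { content: reply };
    },
  };
}

// Usage: assert on what the chain actually sent to the model.
const fake = makeRecordingFake("Recorded summary.");
await fake.invoke("Summarize this text in one sentence:\n\nAlpha beta");
// fake.inputs[0] now holds the rendered prompt, including "Alpha beta".
```

Because the fake exposes its `inputs` array, a test can check that template variables were actually interpolated, not just that some answer came back.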

Testing It

Run your test suite with vitest or jest and confirm that no network requests are made. If you see an API call attempt, your code is still constructing the model inside the function or importing a real client too early.

A good sanity check is to temporarily set OPENAI_API_KEY to an invalid value and rerun the tests; mocked tests should still pass. Also verify that your assertions cover both output and interaction counts, especially if you care about retries or branch selection.
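For retry-sensitive code, interaction counts matter as much as output. The idea can be sketched without any test framework; `invokeWithRetry` below is a stand-in for LangChain's withRetry(), not its real implementation, and `makeFlakyFakeLLM` is a hypothetical helper:

```typescript
type Message = { content: string };

// A fake model that fails a fixed number of times, then succeeds,
// while counting how often invoke() was called.
function makeFlakyFakeLLM(failures: number) {
  let calls = 0;
  return {
    calls: () => calls,
    async invoke(_input: string): Promise<Message> {
      calls += 1;
      if (calls <= failures) throw new Error("transient provider error");
      return { content: "Recovered summary." };
    },
  };
}

// A minimal retry loop, standing in for withRetry().
async function invokeWithRetry(
  llm: { invoke(input: string): Promise<Message> },
  input: string,
  maxAttempts = 3
): Promise<Message> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
    try {
      return await llm.invoke(input);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

const flaky = makeFlakyFakeLLM(1);
const result = await invokeWithRetry(flaky, "Some text");
// One injected failure plus one success: invoke() ran exactly twice.
console.log(result.content, flaky.calls());
```

Asserting on both `result.content` and `flaky.calls()` catches the bug where a retry wrapper silently swallows the first response or retries more often than intended.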

Next Steps

  • Add contract tests for tool-calling chains by mocking structured tool responses.
  • Learn how to use RunnableSequence, withRetry, and withFallbacks without making tests flaky.
  • Move from unit mocks to one small integration test that hits a real provider behind an env flag.
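The last bullet can be as small as a guard function combined with vitest's describe.skipIf. A sketch, where RUN_LLM_INTEGRATION is an assumed variable name rather than a LangChain or vitest convention:

```typescript
// Decide whether the single real-provider test should run. Requiring an
// explicit opt-in AND a key means CI never hits the API by accident.
export function shouldRunIntegration(
  env: Record<string, string | undefined>
): boolean {
  return (
    env.RUN_LLM_INTEGRATION === "1" &&
    typeof env.OPENAI_API_KEY === "string"
  );
}

// In a vitest file:
// describe.skipIf(!shouldRunIntegration(process.env))(
//   "summarizeText (live)",
//   () => { /* one real call against the provider */ }
// );
```

Keeping the gate in a plain function means the opt-in logic itself is unit-testable, while the live test stays skipped by default.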

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
