LangChain Tutorial (TypeScript): mocking LLM calls in tests for beginners
This tutorial shows you how to write TypeScript tests for LangChain code without calling a real LLM. You’ll mock the model layer so your tests are fast, deterministic, and safe to run in CI.
What You'll Need
- Node.js 18+
- TypeScript 5+
- A test runner: `vitest` or `jest`
- LangChain packages: `langchain`, `@langchain/core`, and `@langchain/openai`
- If you want to test real calls later: `OPENAI_API_KEY` or another provider key
- A project that already uses ESM or is configured for it
Install the packages (including `@langchain/openai`, since the examples construct `ChatOpenAI` from it):

```bash
npm install langchain @langchain/core @langchain/openai
npm install -D vitest typescript tsx @types/node
```
Step-by-Step
- Start with a small LangChain function that depends on an LLM. Keep the function narrow so you can mock only the model and not the whole app.
```ts
// src/summarize.ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

export async function summarizeTicket(ticket: string) {
  const prompt = PromptTemplate.fromTemplate(
    "Summarize this support ticket in one sentence:\n\n{ticket}"
  );
  const model = new ChatOpenAI({
    model: "gpt-4o-mini",
    temperature: 0,
  });
  const chain = prompt.pipe(model);
  const result = await chain.invoke({ ticket });
  return result.content;
}
```
- In tests, do not mock LangChain internals globally. Mock the specific class your code constructs, and return a predictable object that matches the shape your code reads.
```ts
// tests/summarize.test.ts
import { describe, it, expect, vi } from "vitest";
import { summarizeTicket } from "../src/summarize";

// vi.mock is hoisted above the imports, so a plain top-level const would not
// be initialized when the factory runs. vi.hoisted makes the mock function
// available inside the factory.
const invokeMock = vi.hoisted(() => vi.fn());

vi.mock("@langchain/openai", () => {
  return {
    ChatOpenAI: vi.fn().mockImplementation(() => ({
      invoke: invokeMock,
    })),
  };
});

describe("summarizeTicket", () => {
  it("returns the mocked summary", async () => {
    invokeMock.mockResolvedValue({ content: "Customer cannot reset password." });

    const output = await summarizeTicket("I forgot my password and cannot log in.");

    expect(output).toBe("Customer cannot reset password.");
    expect(invokeMock).toHaveBeenCalledTimes(1);
  });
});
```
- If your code uses chains built with `prompt.pipe(model)`, mocking only `invoke` is enough, because the chain calls into the model at runtime. This keeps your test focused on behavior instead of implementation details.
```ts
// src/triage.ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

export async function triageMessage(message: string) {
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "Classify the message as billing, technical, or other."],
    ["human", "{message}"],
  ]);
  const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
  const chain = prompt.pipe(model);
  const response = await chain.invoke({ message });
  return response.content;
}
```
- Add a second test that checks branching logic around the LLM output. This is where mocking pays off: you can force each path without waiting on network calls or paying for tokens.
```ts
// tests/triage.test.ts
import { describe, it, expect, vi } from "vitest";
import { triageMessage } from "../src/triage";

// As before, vi.hoisted makes the mock function visible inside the
// hoisted vi.mock factory.
const invokeMock = vi.hoisted(() => vi.fn());

vi.mock("@langchain/openai", () => {
  return {
    ChatOpenAI: vi.fn().mockImplementation(() => ({
      invoke: invokeMock,
    })),
  };
});

describe("triageMessage", () => {
  it("handles billing messages", async () => {
    invokeMock.mockResolvedValue({ content: "billing" });
    const result = await triageMessage("I was charged twice.");
    expect(result).toBe("billing");
  });

  it("handles technical messages", async () => {
    invokeMock.mockResolvedValue({ content: "technical" });
    const result = await triageMessage("The app crashes on login.");
    expect(result).toBe("technical");
  });
});
```
- Wire up a test script in package.json and run it. If the mock is working, the tests should pass even with no API key set.
```json
{
  "scripts": {
    "test": "vitest run"
  }
}
```

Then run:

```bash
npm test
```
Testing It
Run the suite with your network disabled and no provider key in .env. If the tests still pass, you've successfully removed the external LLM dependency from your unit tests.
Also confirm that `invokeMock` was called with stable inputs by asserting on call count or inspecting its arguments when needed. If a test becomes flaky, you're probably asserting too much about prompt formatting instead of behavior.
A good rule here is simple: unit tests should verify your application logic, not whether OpenAI is reachable.
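One way to enforce that rule mechanically is a small environment guard called from a test setup file (vitest supports this via `test.setupFiles`). The function name and key list below are illustrative choices, not a standard API.

```ts
// Sketch of an environment guard for unit tests: throw immediately if a real
// provider key is present, so mocked tests can never silently hit the network.
function assertNoProviderKeys(env: NodeJS.ProcessEnv = process.env): void {
  const keys = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]; // extend as needed
  const found = keys.filter((name) => env[name]);
  if (found.length > 0) {
    throw new Error(`Provider keys set during unit tests: ${found.join(", ")}`);
  }
}
```

You would call `assertNoProviderKeys()` once from a setup file so every run fails fast before any test executes against a real provider.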
Next Steps
- Learn how to mock streaming responses when you use `stream()` instead of `invoke()`
- Add integration tests that hit a real model behind a separate CI job or tag
- Move repeated mocks into a reusable test helper for multiple chains
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit