LangGraph Tutorial (TypeScript): mocking LLM calls in tests for beginners

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to test a LangGraph workflow in TypeScript without making real LLM calls. You need this when your tests must be fast and deterministic, and must not burn tokens or fail because an API is down.

What You'll Need

  • Node.js 18+
  • A TypeScript project
  • @langchain/langgraph
  • @langchain/core
  • jest or vitest
  • ts-jest if you use Jest with TypeScript
  • No API key required for the mocked test path
  • Optional: OPENAI_API_KEY if you want to compare against a real model later

Step-by-Step

  1. Create a small graph that calls an LLM through a dependency you can replace in tests. The key idea is not to call new ChatOpenAI() directly inside the node logic, but to inject a function that returns the model output.
// src/graph.ts
import { Annotation, StateGraph } from "@langchain/langgraph";

export const GraphState = Annotation.Root({
  input: Annotation<string>(),
  output: Annotation<string>(),
});

export type GraphStateType = typeof GraphState.State;

export type LlmCaller = (input: string) => Promise<string>;

export function createGraph(callLlm: LlmCaller) {
  const graph = new StateGraph(GraphState)
    .addNode("generate", async (state) => {
      const output = await callLlm(state.input);
      return { output };
    })
    .addEdge("__start__", "generate")
    .addEdge("generate", "__end__");

  return graph.compile();
}
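Because the dependency is just a function type, you don't need vi.mock or jest.mock module mocking at all: any async (input: string) => Promise<string> can stand in for the model, which avoids the hoisting and import-order pitfalls of module-level mocks.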
  2. Write your production implementation separately. This keeps the graph testable and lets you swap real and fake implementations without changing the workflow code.
// src/llm.ts
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export async function realLlmCall(input: string): Promise<string> {
  const response = await model.invoke(input);
  return typeof response.content === "string"
    ? response.content
    : JSON.stringify(response.content);
}
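In production you wire the two together at the entry point. A minimal sketch (the src/app.ts file name is illustrative, not part of the tutorial's layout):
// src/app.ts
import { createGraph } from "./graph";
import { realLlmCall } from "./llm";

// The compiled graph with the real model injected.
export const app = createGraph(realLlmCall);

// Example usage:
// const result = await app.invoke({ input: "Write a short insurance summary" });
// console.log(result.output);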
  3. Build a test with a mock function. The mock should assert the input it receives and return a fixed value so your test stays deterministic.
// tests/graph.test.ts
import { describe, expect, it, vi } from "vitest";
import { createGraph } from "../src/graph";

describe("LangGraph workflow", () => {
  it("mocks the LLM call", async () => {
    const mockLlm = vi.fn(async (input: string) => {
      expect(input).toBe("Write a short insurance summary");
      return "Summary: policy active, premium paid.";
    });

    const app = createGraph(mockLlm);
    const result = await app.invoke({
      input: "Write a short insurance summary",
      output: "",
    });

    expect(mockLlm).toHaveBeenCalledTimes(1);
    expect(result.output).toBe("Summary: policy active, premium paid.");
  });
});
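If you need different canned replies across calls, Vitest's mockResolvedValueOnce queues them in order and takes priority over the base implementation. A small sketch reusing the same createGraph (the reply strings are placeholders):
// tests/graph-sequence.test.ts
import { expect, it, vi } from "vitest";
import { createGraph } from "../src/graph";

it("returns canned replies in order", async () => {
  const mockLlm = vi
    .fn(async (_input: string) => "default reply")
    .mockResolvedValueOnce("First canned reply.")
    .mockResolvedValueOnce("Second canned reply.");

  const app = createGraph(mockLlm);

  const first = await app.invoke({ input: "a", output: "" });
  const second = await app.invoke({ input: "b", output: "" });

  expect(first.output).toBe("First canned reply.");
  expect(second.output).toBe("Second canned reply.");
});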
  4. If you prefer Jest, the same pattern works with jest.fn(). The important part is that the graph receives a function, not a hard-coded model instance.
// tests/graph.jest.test.ts
import { createGraph } from "../src/graph";

test("mocks the LLM call with Jest", async () => {
  const mockLlm = jest.fn(async (input: string) => {
    expect(input).toBe("Draft an underwriting note");
    return "Underwriting note approved.";
  });

  const app = createGraph(mockLlm);
  const result = await app.invoke({
    input: "Draft an underwriting note",
    output: "",
  });

  expect(mockLlm).toHaveBeenCalledTimes(1);
  expect(result.output).toBe("Underwriting note approved.");
});
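If Jest isn't set up for TypeScript yet, a minimal ts-jest configuration looks like this (Jest reads a TypeScript config file when ts-node is installed; a plain jest.config.js works the same way):
// jest.config.ts
import type { Config } from "jest";

const config: Config = {
  preset: "ts-jest",       // compile .ts test files with ts-jest
  testEnvironment: "node", // no DOM needed for graph tests
};

export default config;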
  5. Add one more test for failure behavior. In production, your LLM wrapper can throw on bad responses or network issues, and your graph tests should cover that path too.
// tests/graph-error.test.ts
import { describe, expect, it } from "vitest";
import { createGraph } from "../src/graph";

describe("LangGraph error handling", () => {
  it("surfaces LLM errors", async () => {
    const failingLlm = async () => {
      throw new Error("LLM unavailable");
    };

    const app = createGraph(failingLlm);

    await expect(
      app.invoke({ input: "Test failure", output: "" })
    ).rejects.toThrow("LLM unavailable");
  });
});
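If you'd rather degrade gracefully than propagate the error, you can catch inside the node and return a fallback value; the test then asserts the fallback instead of the rejection. A sketch assuming the same state shape (the file name and fallback string are illustrative):
// src/graph-with-fallback.ts
import { StateGraph } from "@langchain/langgraph";
import { GraphState, type LlmCaller } from "./graph";

export function createGraphWithFallback(callLlm: LlmCaller) {
  return new StateGraph(GraphState)
    .addNode("generate", async (state) => {
      try {
        return { output: await callLlm(state.input) };
      } catch {
        // Keep the workflow alive with a deterministic fallback.
        return { output: "Summary unavailable. Please retry later." };
      }
    })
    .addEdge("__start__", "generate")
    .addEdge("generate", "__end__")
    .compile();
}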

Testing It

Run your test runner normally, and no network request should happen during these tests. If you used Vitest, run npx vitest run for a single pass (bare npx vitest starts watch mode locally); if you used Jest, run npx jest.

You should see the mocked response returned from the graph every time. If the test becomes flaky or slow, that usually means something in your graph is still calling a real external dependency.

A good sanity check is to temporarily disconnect your machine from the network and rerun the suite. If everything is wired correctly, these tests still pass because they never touch the API.

Next Steps

  • Mock multi-node graphs where each node depends on a different external service (see the sketch after this list).
  • Add snapshot tests for structured outputs like JSON summaries or classification labels.
  • Wrap real model calls behind a small adapter layer so production and test code stay separate.
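For the first of these, the injection pattern scales node by node: each external call becomes its own injected function, and each gets its own mock in tests. A sketch of a two-node graph (the fetchPolicy dependency and the node names are illustrative):
// src/two-node-graph.ts
import { Annotation, StateGraph } from "@langchain/langgraph";

const State = Annotation.Root({
  input: Annotation<string>(),
  policy: Annotation<string>(),
  output: Annotation<string>(),
});

type Fetcher = (id: string) => Promise<string>;
type Caller = (prompt: string) => Promise<string>;

export function createTwoNodeGraph(fetchPolicy: Fetcher, callLlm: Caller) {
  return new StateGraph(State)
    // First node hits one external service (e.g., a policy database).
    .addNode("fetch", async (state) => ({
      policy: await fetchPolicy(state.input),
    }))
    // Second node hits another (the LLM), consuming the first node's result.
    .addNode("generate", async (state) => ({
      output: await callLlm(`Summarize: ${state.policy}`),
    }))
    .addEdge("__start__", "fetch")
    .addEdge("fetch", "generate")
    .addEdge("generate", "__end__")
    .compile();
}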
