LangGraph Tutorial (TypeScript): Mocking LLM Calls in Tests for Intermediate Developers

By Cyprian Aarons. Updated 2026-04-22

This tutorial shows how to test a LangGraph workflow in TypeScript without calling a real model. You’ll replace LLM calls with deterministic mocks so your tests are fast, stable, and cheap.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with ts-node or a build step
  • @langchain/langgraph
  • @langchain/core
  • A test runner like vitest or jest
  • No API key required for the mocked test path
  • Optional: OPENAI_API_KEY if you want to compare mocked vs real runs later

Step-by-Step

  1. Install the packages and set up a minimal test project.
    We only need LangGraph plus the core message types and a test runner.
npm install @langchain/langgraph @langchain/core
npm install -D typescript tsx vitest @types/node
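Vitest runs TypeScript tests out of the box, so configuration is optional. If you want to pin where it looks for tests, a minimal config is enough; the `tests/` glob below is an assumption matching the project layout used in this tutorial.

```typescript
// vitest.config.ts — minimal sketch; adjust the include glob to your layout
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    include: ["tests/**/*.test.ts"],
  },
});
```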
  2. Create a graph that calls an injected model function instead of hardcoding an LLM client.
    This is the key pattern: your graph should depend on a function you can swap in tests.
// src/graph.ts
import { Annotation, END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage, BaseMessage } from "@langchain/core/messages";

export type ModelFn = (messages: BaseMessage[]) => Promise<AIMessage>;

const GraphState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

export function buildGraph(model: ModelFn) {
  // The node depends only on the injected ModelFn, so tests can pass a mock.
  const node = async (state: typeof GraphState.State) => {
    const reply = await model(state.messages);
    return { messages: [reply] };
  };

  const graph = new StateGraph(GraphState)
    .addNode("chat", node)
    .addEdge(START, "chat")
    .addEdge("chat", END);

  return graph.compile();
}
  3. Write a mock model that returns deterministic responses based on the last user message.
    This lets you assert exact outputs without network calls or flaky behavior.
// tests/mockModel.ts
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";

export async function mockModel(messages: BaseMessage[]) {
  const lastUser = [...messages].reverse().find((m) => m instanceof HumanMessage);

  if (!lastUser) {
    return new AIMessage("No user input found.");
  }

  const content = String(lastUser.content);

  if (content.includes("hello")) {
    return new AIMessage("Hi from mock.");
  }

  if (content.includes("refund")) {
    return new AIMessage("Refunds are handled by support.");
  }

  return new AIMessage(`Mocked reply for: ${content}`);
}
  4. Test the graph by invoking it with a human message and asserting the final state.
    The graph still runs normally; only the model call is mocked.
// tests/graph.test.ts
import { describe, expect, it } from "vitest";
import { HumanMessage } from "@langchain/core/messages";
import { buildGraph } from "../src/graph";
import { mockModel } from "./mockModel";

describe("LangGraph mock testing", () => {
  it("returns a deterministic mocked response", async () => {
    const graph = buildGraph(mockModel);

    const result = await graph.invoke({
      messages: [new HumanMessage("hello there")],
    });

    expect(result.messages).toHaveLength(2);
    expect(result.messages[1].content).toBe("Hi from mock.");
  });

  it("handles domain-specific prompts", async () => {
    const graph = buildGraph(mockModel);

    const result = await graph.invoke({
      messages: [new HumanMessage("I need a refund")],
    });

    expect(result.messages[1].content).toBe(
      "Refunds are handled by support."
    );
  });
});
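Beyond asserting outputs, you often want to assert how the model was called. A small call-recording wrapper does this; note this is a generic test helper sketch, not a LangGraph or vitest API.

```typescript
// Generic spy wrapper: wraps any function and records the arguments
// of each call so tests can assert invocation count and inputs.
export function withSpy<A extends unknown[], R>(fn: (...args: A) => R) {
  const calls: A[] = [];
  const spied = (...args: A): R => {
    calls.push(args); // record arguments before delegating
    return fn(...args);
  };
  return { spied, calls };
}
```

In a test you would build the graph with `withSpy(mockModel).spied`, invoke it, and then assert that `calls` has length 1 and that the recorded messages match what the node should have passed in.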
  5. If your production code uses a real chat model, keep that in a separate factory and inject it into the same graph builder.
    That way your app and tests share the same workflow logic while swapping only the dependency.
// src/realModel.ts
// Extra dependency for the real path only: npm install @langchain/openai
import { ChatOpenAI } from "@langchain/openai";
import { AIMessage, BaseMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

export async function realModel(messages: BaseMessage[]): Promise<AIMessage> {
  const response = await llm.invoke(messages);
  // Preserve the full content rather than casting it to a string,
  // since chat models can return structured (non-string) content.
  return new AIMessage({ content: response.content });
}
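Wiring the two together usually comes down to a tiny factory. Here is a self-contained sketch of the selection logic, simplified to string in/out so it stands alone; the `useMock` flag and both model stand-ins are hypothetical, not LangChain conventions.

```typescript
// Hypothetical selection logic: tests inject the mock directly,
// while an app entry point flips a flag (e.g. from an env var).
// ModelFn here is a simplified stand-in for the type in src/graph.ts.
type ModelFn = (prompt: string) => Promise<string>;

const mock: ModelFn = async (p) => `mock reply to: ${p}`;
const real: ModelFn = async (p) => `real reply to: ${p}`; // stand-in for the OpenAI-backed fn

export function selectModel(useMock: boolean): ModelFn {
  return useMock ? mock : real;
}
```

In an app entry point you would call the analogous factory with something like `process.env.MOCK_LLM === "1"` and hand the result to `buildGraph`, so the workflow code never knows which implementation it got.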

Testing It

Run the test suite with your runner of choice:

npx vitest run

You should see both tests pass without any API traffic. If one fails, check that your mock returns an AIMessage and that your assertions match the exact final state shape LangGraph produces.

A good sanity check is to log result.messages once and inspect the order. In this pattern, you should always see the original HumanMessage first and the mocked AIMessage second.

Next Steps

  • Add tool nodes and mock tool outputs the same way you mocked the model
  • Use table-driven tests for multiple prompt variants and edge cases
  • Split graphs into reusable subgraphs so each unit can be tested in isolation
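The table-driven idea above can be sketched without a test runner: keep prompt/expectation pairs in one array and loop over them. The `route` helper below is a hypothetical synchronous stand-in for the keyword logic in mockModel, inlined so the example is self-contained.

```typescript
// Hypothetical stand-in for mockModel's keyword routing.
function route(content: string): string {
  if (content.includes("hello")) return "Hi from mock.";
  if (content.includes("refund")) return "Refunds are handled by support.";
  return `Mocked reply for: ${content}`;
}

// Each case pairs an input prompt with the reply the mock should produce.
const cases: Array<{ prompt: string; expected: string }> = [
  { prompt: "hello there", expected: "Hi from mock." },
  { prompt: "I need a refund", expected: "Refunds are handled by support." },
  { prompt: "tell me a joke", expected: "Mocked reply for: tell me a joke" },
];

for (const { prompt, expected } of cases) {
  if (route(prompt) !== expected) {
    throw new Error(`route("${prompt}") did not produce "${expected}"`);
  }
}
```

With vitest, the same table translates directly to `it.each(cases)` so each row reports as its own test.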

By Cyprian Aarons, AI Consultant at Topiax.
