LangChain Tutorial (TypeScript): testing agents locally for advanced developers

By Cyprian Aarons. Updated 2026-04-21.

This tutorial shows you how to run and test a LangChain agent locally in TypeScript without wiring it into a web app first. You need this when you want fast iteration on prompts, tools, and agent behavior before you expose anything to users or production infra.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with ts-node or tsx
  • Packages:
    • langchain
    • @langchain/core
    • @langchain/openai
    • zod
    • dotenv
  • An OpenAI API key in .env as OPENAI_API_KEY=...
  • Basic familiarity with LangChain message history, tools, and chat models

Step-by-Step

  1. Start by creating a minimal TypeScript project and installing the dependencies. Keep the setup boring; the point is to test agent logic locally, not fight your build chain.
mkdir lc-agent-local-test
cd lc-agent-local-test
npm init -y
npm install langchain @langchain/core @langchain/openai zod dotenv
npm install -D typescript tsx @types/node
npx tsc --init --rootDir src --outDir dist --module nodenext --moduleResolution nodenext --target es2022 --esModuleInterop true
mkdir src
  2. Add your environment variable and a simple tool set. For local testing, use deterministic tools that are easy to inspect, like math helpers or lookup functions.
// src/tools.ts
import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const addTool = tool(
  async ({ a, b }) => String(a + b),
  {
    name: "add",
    description: "Add two numbers together.",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

export const multiplyTool = tool(
  async ({ a, b }) => String(a * b),
  {
    name: "multiply",
    description: "Multiply two numbers together.",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);
  3. Build the agent with a real chat model and bind the tools directly. This keeps the test surface small and lets you inspect whether the model is choosing the right tool calls.
// src/agent.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { addTool, multiplyTool } from "./tools.js";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const llmWithTools = model.bindTools([addTool, multiplyTool]);

async function main() {
  const messages = [
    new HumanMessage("What is 12 + 8, then multiply that result by 3? Use tools."),
  ];

  const response = await llmWithTools.invoke(messages);
  console.log(JSON.stringify(response, null, 2));
}

main().catch(console.error);
  4. If you want actual agent execution instead of just tool binding, use LangChain’s agent executor pattern. This is what you test when you care about multi-step reasoning and repeated tool use.
// src/executor.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { createOpenAIToolsAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { addTool, multiplyTool } from "./tools.js";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a precise math assistant. Use tools when needed."],
  ["human", "{input}"],
]);

async function main() {
  const tools = [addTool, multiplyTool];
  const agent = await createOpenAIToolsAgent({
    llm,
    tools,
    prompt,
  });

  const executor = new AgentExecutor({
    agent,
    tools,
    verbose: true,
  });

  const result = await executor.invoke({
    input: "What is (12 + 8) * 3?",
  });

  console.log(result.output);
}

main().catch(console.error);
  5. Add a repeatable local test harness so you can run assertions against outputs. For advanced development, this matters more than manual console checks because it catches regressions in prompts and tool routing.
// src/test-agent.ts
import "dotenv/config";
import assert from "node:assert/strict";
import { ChatOpenAI } from "@langchain/openai";
import { createOpenAIToolsAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { addTool, multiplyTool } from "./tools.js";

async function run() {
  const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a precise math assistant."],
    ["human", "{input}"],
  ]);

  const tools = [addTool, multiplyTool];
  const agent = await createOpenAIToolsAgent({ llm, tools, prompt });
  const executor = new AgentExecutor({ agent, tools });

  const result = await executor.invoke({ input: "What is (12 + 8) * 3?" });
  assert.ok(result.output.includes("60"), `Unexpected output: ${result.output}`);

  console.log("Test passed:", result.output);
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
  6. Wire up npm scripts in package.json so you can run both the interactive check and the assertion-based test quickly. You want one command for debugging behavior and another for verifying expected output.
{
	"scripts": {
		"agent": "tsx src/agent.ts",
		"exec": "tsx src/executor.ts",
		"test": "tsx src/test-agent.ts"
	}
}

Testing It

Run npm run exec first if you want to watch the agent reason through the task with verbose tool calls. Then run npm run test to confirm the final answer still contains 60.

If the test fails, inspect whether the model skipped a tool call or returned extra text that breaks your assertion. For local agent work, that usually means your system prompt is too loose or your tool descriptions are too vague.
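One way to harden that assertion against extra text is to pull the final number out of the answer before comparing. A minimal helper sketch (the name extractFinalNumber is ours, not a LangChain API):

```typescript
// Extract the last number from a model's free-text answer so assertions
// don't break when the agent wraps the result in prose.
export function extractFinalNumber(output: string): number | null {
  const matches = output.match(/-?\d+(?:\.\d+)?/g);
  if (!matches || matches.length === 0) return null;
  return Number(matches[matches.length - 1]);
}
```

With this, the harness can assert extractFinalNumber(result.output) === 60 instead of doing a substring check, which survives rephrasing like "The answer is 60." or "(12 + 8) * 3 = 60".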

If you want deeper verification, log intermediate steps by keeping verbose: true on the executor and checking whether each call matches your intended flow. That’s how you catch bad routing before it turns into flaky app behavior.
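If the executor is constructed with returnIntermediateSteps: true, the invoke result should also carry an intermediateSteps array where each entry pairs the chosen action (including the tool name) with its observation. Assuming that shape, a small routing checker might look like this (the helper names are ours):

```typescript
// Minimal shape of one intermediate step from AgentExecutor when
// constructed with returnIntermediateSteps: true (assumed here).
interface StepLike {
  action: { tool: string };
  observation: string;
}

// List the tools the agent actually called, in order.
export function toolCallOrder(steps: StepLike[]): string[] {
  return steps.map((s) => s.action.tool);
}

// Throw if the agent's tool routing deviates from the expected sequence.
export function assertToolOrder(steps: StepLike[], expected: string[]): void {
  const actual = toolCallOrder(steps);
  const matches =
    actual.length === expected.length &&
    actual.every((t, i) => t === expected[i]);
  if (!matches) {
    throw new Error(
      `Expected tool order ${expected.join(" -> ")}, got ${actual.join(" -> ")}`
    );
  }
}
```

For the math task above you would expect assertToolOrder(result.intermediateSteps, ["add", "multiply"]) to pass, which catches the agent answering from memory or calling tools in the wrong order.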

Next Steps

  • Add more realistic tools backed by local mocks for CRM lookups or policy data.
  • Test structured outputs with Zod schemas instead of plain string answers.
  • Move from single-case assertions to table-driven tests for multiple prompts and edge cases.
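The table-driven idea from the last bullet can be sketched as a generic runner. The agent function is injected so the same harness works against the real AgentExecutor in integration runs and a stub in fast unit tests (runTable and AgentCase are our names, not LangChain's):

```typescript
// One test case: the prompt to send and a predicate over the answer.
interface AgentCase {
  input: string;
  expect: (output: string) => boolean;
}

// Run every case through an injected agent function and collect
// failures instead of stopping at the first one, so a single run
// reports all regressions across the table.
export async function runTable(
  cases: AgentCase[],
  runAgent: (input: string) => Promise<string>
): Promise<string[]> {
  const failures: string[] = [];
  for (const c of cases) {
    const output = await runAgent(c.input);
    if (!c.expect(output)) {
      failures.push(`"${c.input}" -> unexpected output: ${output}`);
    }
  }
  return failures;
}
```

In practice you would pass (input) => executor.invoke({ input }).then((r) => r.output) as runAgent and fail the process when the returned failures array is non-empty.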

By Cyprian Aarons, AI Consultant at Topiax.
