LangChain Tutorial (TypeScript): testing agents locally for beginners
This tutorial shows you how to build a small LangChain agent in TypeScript and test it locally without wiring up a full app or deploying anything. You need this when you want to debug agent behavior, tool calls, and prompts before putting the agent behind an API or into production.
What You'll Need
- Node.js 18+
- TypeScript 5+
- npm or pnpm
- An OpenAI API key
- These packages:
  - langchain
  - @langchain/openai
  - @langchain/core
  - zod
  - dotenv
  - typescript
  - tsx
Step-by-Step
1. Set up a minimal TypeScript project and install the dependencies. Keep this local and simple so you can iterate on prompts and tools without extra infrastructure.
```shell
mkdir langchain-agent-local-test
cd langchain-agent-local-test
npm init -y
npm install langchain @langchain/openai @langchain/core zod dotenv
npm install -D typescript tsx @types/node
npx tsc --init
```
2. Create an environment file with your API key. The `dotenv/config` import loads it at runtime, so your code stays clean and you can swap keys easily during testing.
```shell
cat > .env << 'EOF'
OPENAI_API_KEY=your_openai_api_key_here
EOF
```
3. Build a small agent with one real tool. This example gives the agent a calculator-style tool so you can verify tool calling locally instead of guessing whether the model used it. Save it as `index.ts`:
```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two numbers together",
  schema: z.object({
    a: z.number(),
    b: z.number(),
  }),
  func: async ({ a, b }) => `${a * b}`,
});

// bindTools attaches the tool definition to every request made
// through this model instance.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).bindTools([multiplyTool]);

async function main() {
  const response = await llm.invoke([
    {
      role: "system",
      content:
        "You are a helpful assistant. Use the multiply tool when asked to compute multiplication.",
    },
    { role: "user", content: "What is 17 times 23?" },
  ]);

  // When the model decides to call a tool, content is typically empty
  // and the request shows up in tool_calls instead, so log both.
  console.log("content:", response.content);
  console.log("tool calls:", response.tool_calls);
}

main();
```
4. Add a local test runner that exercises the same logic repeatedly. For beginner debugging, plain Node execution is enough; you want fast feedback on whether the model answers correctly and whether your tool schema is valid. Save it as `tests.ts`:
```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two numbers together",
  schema: z.object({ a: z.number(), b: z.number() }),
  func: async ({ a, b }) => `${a * b}`,
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 }).bindTools([
  multiplyTool,
]);

async function runTest(prompt: string) {
  const result = await llm.invoke([
    {
      role: "system",
      content: "Use tools when needed. Return concise answers.",
    },
    { role: "user", content: prompt },
  ]);

  console.log(`PROMPT: ${prompt}`);
  console.log(`ANSWER: ${result.content}`);
  console.log(`TOOL CALLS: ${JSON.stringify(result.tool_calls)}`);
}

async function main() {
  await runTest("What is 8 times 9?");
  await runTest("What is the product of 12 and 14?");
}

main();
```
5. Run both scripts locally and inspect the output. If an answer is wrong, change the system message first, then adjust the tool description or schema before touching anything else.

```shell
npx tsx index.ts
npx tsx tests.ts
```
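If you would rather not retype the commands, you can add npm scripts to package.json. The script names below are just a suggestion, and this assumes the agent and test runner were saved as index.ts and tests.ts:

```json
{
  "scripts": {
    "agent": "tsx index.ts",
    "test:prompts": "tsx tests.ts"
  }
}
```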
Testing It
Run the script several times with different prompts that should clearly trigger the tool, like “What is 6 times 7?” and “Multiply 19 by 4.” If your setup is working, you should see consistent numeric answers and no TypeScript or runtime import errors.
If the model answers in natural language but gets the math wrong, tighten the system prompt so it explicitly says to use tools for arithmetic. If you get schema errors, check that your tool arguments are typed with z.number() and that nothing in your prompt nudges the model into sending strings like "17" where numbers are expected.
For more realistic agent testing, keep a small list of prompts in a file and run them as regression tests before every change. That gives you a cheap way to catch prompt drift and broken tool definitions early.
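A sketch of what that prompt list could look like. The names here (PromptCase, checkAnswer) are illustrative, not LangChain APIs, and the pass condition is simple substring matching on the model's answer:

```typescript
// A minimal regression-check sketch: keep each prompt next to a
// substring its correct answer must contain.
interface PromptCase {
  prompt: string;
  expect: string; // substring the correct answer must contain
}

const cases: PromptCase[] = [
  { prompt: "What is 6 times 7?", expect: "42" },
  { prompt: "Multiply 19 by 4.", expect: "76" },
];

// Returns true when the model's answer contains the expected substring.
function checkAnswer(answer: string, expected: string): boolean {
  return answer.includes(expected);
}

console.log(checkAnswer("6 times 7 is 42.", "42")); // true
console.log(checkAnswer("I think it is 41.", "42")); // false
```

In a real run you would feed each case's prompt to the model and pass the returned text to checkAnswer instead of the hard-coded strings.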
Next Steps
- Add Jest or Vitest so your prompts become repeatable regression tests.
- Replace the single tool with multiple domain tools, like policy lookup or claims status lookup.
- Move from direct LLM calls to an actual LangChain agent executor once you need multi-step reasoning and tool selection.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.