# How to Fix 'agent infinite loop during development' in LangChain (TypeScript)
When LangChain says your agent is in an infinite loop, it usually means the model keeps producing tool calls or agent steps without ever reaching a final answer. In TypeScript projects, this typically shows up during development when the agent is wired to call tools but nothing in the loop can satisfy the model’s next step.
The common pattern is simple: the agent keeps asking for a tool result, gets one, then asks again with no exit condition. That’s usually a prompt issue, a tool design issue, or a bad agent executor config.
## The Most Common Cause
The #1 cause is a tool that never gives the model enough information to stop.
You’ll often see this with AgentExecutor or createToolCallingAgent when the tool returns vague output like "done" or raw JSON that doesn’t answer the user’s request. The model interprets that as incomplete and keeps calling tools.
### Broken vs. fixed pattern
| Broken | Fixed |
|---|---|
| Tool returns generic text | Tool returns final, actionable data |
| Agent prompt doesn’t tell model when to stop | Prompt explicitly tells model to answer once it has enough info |
| No iteration guard | `maxIterations` (or an equivalent cap) is set |
```ts
// ❌ Broken: tool output is too vague, so the agent loops
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Looks up customer by email",
  schema: z.object({ email: z.string() }),
  func: async ({ email }) => {
    // Returns too little context for the model to answer from
    return "found";
  },
});

const agent = await createToolCallingAgent({
  llm,
  tools: [lookupCustomer],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [lookupCustomer],
});

await executor.invoke({
  input: "Find the customer and tell me their risk score",
});
```
```ts
// ✅ Fixed: tool returns useful data and the executor has guardrails
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Use tools only when needed. Once you have enough information, provide a final answer.",
  ],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description:
    "Looks up a customer by email and returns the profile fields needed for answering",
  schema: z.object({ email: z.string() }),
  func: async ({ email }) => {
    // Complete, structured output the model can answer from directly
    return JSON.stringify({
      email,
      riskScore: 82,
      status: "high-risk",
      reason: "Multiple recent chargebacks",
    });
  },
});

const agent = await createToolCallingAgent({
  llm,
  tools: [lookupCustomer],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [lookupCustomer],
  maxIterations: 5, // hard stop even if the model never settles
});

await executor.invoke({
  input: "Find the customer and tell me their risk score",
});
```
If you’re using `createReactAgent`, `createOpenAIFunctionsAgent`, or `createToolCallingAgent`, the same rule applies: the tool result must be good enough for the model to produce a final response without guessing.
## Other Possible Causes
### 1) Your prompt encourages repeated tool use
If your system message says things like “always verify with tools” or “never answer without checking again,” you can trap the agent in a loop.
```ts
// Bad prompt fragment
"You must keep using tools until you are completely certain."

// Better prompt fragment
"Use tools only when needed. Once you have enough information, provide a final answer."
```
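To make the difference concrete, here is a minimal sketch of a system prompt with an explicit exit condition, plus a dev-time lint you can run over your prompts. The exact wording and the `hasStopInstruction` helper are illustrative, not a LangChain API:

```typescript
// Hypothetical system prompt: it explicitly tells the model when to stop.
const SYSTEM_PROMPT = [
  "You are a support assistant with access to tools.",
  "Use tools only when needed.",
  "Once you have enough information, provide a final answer and stop calling tools.",
].join(" ");

// Cheap heuristic check for development: flag prompts that never mention
// giving a final answer, or that demand endless tool use.
function hasStopInstruction(prompt: string): boolean {
  return /final answer/i.test(prompt) && !/keep using tools/i.test(prompt);
}
```

Running this check over every agent prompt in CI is a low-effort way to catch loop-prone instructions before they reach an executor.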
### 2) Tool schema doesn’t match actual output
A mismatched schema makes the model think it got partial data. This happens when your tool returns strings but your prompt expects structured fields.
```ts
// Returns a bare string...
func: async () => "42"

// ...but the prompt expects structured fields:
{
  accountId: string;
  balance: number;
}
```
Fix it by returning consistent structured JSON:
```ts
func: async () => JSON.stringify({ accountId: "123", balance: 42 })
```
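One way to keep the output and the expected shape in sync is to build the JSON from a single typed object, so the tool physically cannot return a partial payload. A minimal sketch with illustrative names (`AccountInfo`, `lookupAccount` are not part of any library):

```typescript
// The interface is the single source of truth for what the tool returns.
interface AccountInfo {
  accountId: string;
  balance: number;
}

// Hypothetical lookup: the compiler guarantees every field is present
// before the object is serialized for the model.
function lookupAccount(accountId: string): string {
  const info: AccountInfo = { accountId, balance: 42 };
  return JSON.stringify(info);
}
```

Because the object is typed before serialization, adding a field to the interface forces every tool that returns it to be updated, which keeps prompt expectations and tool output from drifting apart.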
### 3) One tool calls another tool indirectly
If Tool A invokes an LLM chain that can call Tool A again, you’ve built recursion into your stack.
```ts
// Risky pattern: the chain can route back to the same tool
const toolA = async () => {
  return await chain.invoke("check status");
};
```
Break the cycle by separating:
- pure data-fetch tools
- reasoning chains
- final response generation
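If you can’t fully separate the layers, a defensive fallback is a re-entrancy guard that refuses to run a tool already on the call stack, so the recursion fails fast instead of looping. This is a sketch with illustrative names, not a LangChain feature:

```typescript
// Tracks which tools are currently executing so a chain that routes back
// into the same tool throws immediately instead of recursing forever.
const activeTools = new Set<string>();

async function runGuarded<T>(name: string, fn: () => Promise<T>): Promise<T> {
  if (activeTools.has(name)) {
    throw new Error(`Re-entrant call to tool "${name}" detected`);
  }
  activeTools.add(name);
  try {
    return await fn();
  } finally {
    activeTools.delete(name); // always release, even on error
  }
}
```

Wrapping each tool's `func` in `runGuarded(toolName, ...)` turns a silent infinite loop into a loud, debuggable error during development.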
### 4) Missing stop conditions in custom loops
If you wrote your own agent loop instead of using AgentExecutor, you may not be checking iteration count or repeated actions.
```ts
// ❌ No exit condition
while (true) {
  const action = await agent.plan(...);
  // no break condition here
}
```
Use explicit guards:
```ts
// ✅ Bounded, with an explicit exit
for (let i = 0; i < maxIterations; i++) {
  const action = await agent.plan(...);
  if (action.type === "final") break; // stop once the agent has an answer
}
```
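Here is a self-contained sketch of a guarded loop with both an iteration cap and an exit condition. The `plan` callback and the `Step` type are stand-ins for your own agent logic, not a LangChain API:

```typescript
// A step is either a tool request or a final answer.
type Step =
  | { type: "tool"; name: string }
  | { type: "final"; answer: string };

async function runLoop(
  plan: (history: Step[]) => Promise<Step>,
  maxIterations = 5,
): Promise<string> {
  const history: Step[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const step = await plan(history);
    history.push(step);
    if (step.type === "final") return step.answer; // explicit exit condition
  }
  // The cap turns a silent infinite loop into a visible failure.
  throw new Error(`No final answer after ${maxIterations} iterations`);
}
```

The key design choice is that the loop has two ways to end, and both are explicit: a terminal step returns, and exhausting the budget throws rather than returning a half-finished result.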
## How to Debug It
- Turn on verbose logging
  - In LangChain, enable tracing or verbose mode so you can see repeated `AgentAction` calls.
  - Look for the same tool being called with nearly identical inputs.
- Check whether the model ever emits a final answer
  - If you only see repeated `tool_calls` and never a terminal assistant message, the issue is in prompt/tool design.
  - With OpenAI-style models, inspect whether you get repeated function calls instead of content.
- Inspect the last few tool outputs
  - If they are short strings like `"ok"`, `"done"`, or empty objects, that’s usually the trigger.
  - Make sure each output contains enough detail for one-shot completion.
- Add hard limits
  - Set `maxIterations`.
  - Add timeouts around external API calls.
  - Log repeated `(toolName, args)` pairs to catch recursion fast.
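Logging repeated `(toolName, args)` pairs can be automated with a tiny duplicate-call detector during development. This is a sketch of one possible helper, not part of LangChain:

```typescript
// Flags when the agent issues the same tool call with identical arguments,
// which is the usual signature of a loop.
function makeLoopDetector(threshold = 2) {
  const counts = new Map<string, number>();
  return (toolName: string, args: unknown): boolean => {
    const key = `${toolName}:${JSON.stringify(args)}`;
    const n = (counts.get(key) ?? 0) + 1;
    counts.set(key, n);
    return n >= threshold; // true once the same call repeats
  };
}
```

Call the returned function from your tool wrapper and log a warning, or abort the run, as soon as it returns `true`.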
## Prevention
- Return structured, complete tool outputs.
- Keep prompts explicit about when to stop calling tools.
- Set iteration caps in every production agent:
```ts
new AgentExecutor({
  agent,
  tools,
  maxIterations: 5,
});
```
- Test agents with adversarial prompts like "Keep checking until you're absolutely sure." If that causes looping in dev, fix it before shipping.
The practical rule is this: if an agent can’t decide when it has enough information, it will keep asking for more. In LangChain TypeScript projects, that usually means tightening your tool outputs, simplifying your prompt, and putting hard limits on execution.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.