How to Fix 'intermittent 500 errors in production' in LangChain (TypeScript)
Intermittent 500 errors in production usually mean your LangChain app is failing under specific runtime conditions, not every request. In TypeScript, the usual pattern is: local tests pass, but production traffic triggers timeouts, rate limits, bad tool calls, or unhandled promise rejections inside the chain.
With LangChain, these failures often show up as generic server errors from your API layer while the real issue is buried in a nested exception like AIMessage parsing failure, OpenAIError, ECONNRESET, or a tool invocation mismatch.
The Most Common Cause
The #1 cause I see is unhandled async failures inside a chain or agent step, usually from a model call, tool call, or parser step that gets swallowed until your HTTP handler returns 500.
This happens a lot when people call .invoke() or .stream() without wrapping the full request path in try/catch, or when they mix callback-style code with promises and forget to await the final result.
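It is worth seeing in isolation why a missing `await` produces confusing output rather than a clean crash. This dependency-free sketch shows what your API layer actually serializes when the chain result is still a pending Promise:

```ts
// A pending Promise has no enumerable properties, so JSON.stringify turns it
// into {}. This is exactly what a handler returns when it forgets to await
// chain.invoke().
function getAnswer(): Promise<string> {
  return Promise.resolve("42");
}

const missingAwait = { answer: getAnswer() }; // answer is a Promise
console.log(JSON.stringify(missingAwait)); // {"answer":{}}

getAnswer().then((answer) => {
  console.log(JSON.stringify({ answer })); // {"answer":"42"}
});
```

The client sees `{"answer":{}}`, any rejection from the chain becomes an unhandled promise rejection, and the failure surfaces later as an intermittent 500.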
Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Errors bubble out as intermittent 500s | Errors are caught and mapped to deterministic responses |
| Missing await on chain execution | Full async flow awaited |
| No logging of root error | Root error logged with context |
```ts
// broken.ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const prompt = PromptTemplate.fromTemplate(
  "Answer this support question: {question}"
);

const chain = prompt.pipe(llm).pipe(new StringOutputParser());

export async function handler(req: Request) {
  const { question } = await req.json();
  // Missing try/catch around the full chain execution, and missing await
  const answer = chain.invoke({ question });
  return Response.json({ answer }); // answer is a Promise here
}
```
```ts
// fixed.ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = PromptTemplate.fromTemplate(
  "Answer this support question: {question}"
);

const chain = prompt.pipe(llm).pipe(new StringOutputParser());

export async function handler(req: Request) {
  try {
    const { question } = await req.json();
    const answer = await chain.invoke({ question });
    return Response.json({ answer });
  } catch (err) {
    console.error("LangChain request failed", err);
    return Response.json(
      { error: "Internal Server Error" },
      { status: 500 }
    );
  }
}
```
If you are using an agent, the same issue shows up as:
- `TypeError: Cannot read properties of undefined`
- `Error: Failed to parse AI message content`
- `Tool input validation failed`
The important part is that the actual failure happens before your framework converts it into an HTTP 500.
Other Possible Causes
1) Tool schema mismatch
If your tool expects structured input but the model emits malformed arguments, LangChain throws during parsing.
```ts
// common failure
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const lookupPolicy = new DynamicStructuredTool({
  name: "lookup_policy",
  description: "Fetch policy details",
  schema: z.object({
    policyId: z.string(),
  }),
  func: async ({ policyId }) => `Policy ${policyId}`,
});
```
Typical runtime error:
- `Error: Tool input validation failed`
- `Failed to parse tool arguments`
Fix by tightening prompts and validating tool inputs before execution.
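One way to validate before execution, sketched here without zod so it stays dependency-free, is a guard that rejects malformed model-emitted arguments before the tool's `func` ever runs. The `safeToolInput` helper and its result shape are illustrative, not LangChain API:

```ts
// Illustrative guard: validate model-emitted arguments before executing the
// tool. Returns a typed result instead of throwing, so the caller decides how
// to respond (retry the model, return 422, etc.).
type ToolInputResult =
  | { ok: true; input: { policyId: string } }
  | { ok: false; error: string };

function safeToolInput(raw: unknown): ToolInputResult {
  if (typeof raw !== "object" || raw === null) {
    return { ok: false, error: "tool arguments must be an object" };
  }
  const policyId = (raw as Record<string, unknown>).policyId;
  if (typeof policyId !== "string" || policyId.length === 0) {
    return { ok: false, error: "policyId must be a non-empty string" };
  }
  return { ok: true, input: { policyId } };
}

console.log(safeToolInput({ policyId: "P-123" })); // ok: true
console.log(safeToolInput({ policyId: 42 }));      // ok: false
```

If you already have a zod schema on the tool, `schema.safeParse(raw)` gives you the same throw-free check.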
2) Model rate limiting or transient provider failures
Production traffic can trigger:
- `429 Too Many Requests`
- `503 Service Unavailable`
- `ECONNRESET`
Use retries with backoff at the boundary:
```ts
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
  maxRetries: 3,
});
```
If you already have retry logic at the API gateway, avoid double-retrying without bounds.
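If you do want retries at the application layer, keep the budget explicit. This is a generic sketch, not LangChain API; `maxAttempts` is the total call budget, so stacking it on top of gateway retries stays predictable:

```ts
// Illustrative bounded retry with exponential backoff. maxAttempts is the
// total number of calls, so the worst case is always visible at the call site.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

If you use a helper like this, consider setting `maxRetries: 0` on the model so there is exactly one retry layer.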
3) Context window overflow
Long chat histories can push requests over token limits. In LangChain this often looks like provider-specific token errors.
Common symptoms:
- OpenAI-style token limit errors
- Truncated messages
- Random failures only on long conversations
Fix by trimming history:
```ts
import { trimMessages } from "@langchain/core/messages";

const trimmedMessages = await trimMessages(messages, {
  maxTokens: 6000,
  strategy: "last", // keep the most recent messages
  tokenCounter: llm, // required: a model instance or a (messages) => number function
});
```
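If you want a dependency-free fallback, a rough budget-based trimmer over plain message objects captures the same idea. The `SimpleMessage` shape and the 4-characters-per-token heuristic are assumptions for illustration, not LangChain API:

```ts
// Rough trimmer: keeps the most recent messages whose combined estimated
// token count fits the budget. ~4 characters per token is a crude heuristic
// for English text; use a real tokenizer for accuracy.
interface SimpleMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function trimByTokenBudget(
  messages: SimpleMessage[],
  maxTokens: number
): SimpleMessage[] {
  const kept: SimpleMessage[] = [];
  let used = 0;
  // Walk from newest to oldest so recent context survives.
  for (let i = messages.length - 1; i >= 0; i--) {
    const estimate = Math.ceil(messages[i].content.length / 4);
    if (used + estimate > maxTokens) break;
    used += estimate;
    kept.unshift(messages[i]);
  }
  return kept;
}
```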
4) Environment/config drift between local and prod
This one causes a lot of “works on my machine” incidents.
Check for:
- missing `OPENAI_API_KEY`
- wrong region/model name
- Node version mismatch
- edge runtime incompatibility with dependencies
Example:
```ts
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is missing");
}
```
Also verify you are not deploying Node-only code to an edge runtime that does not support it.
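To catch all of the drift at once, it helps to validate every required variable at startup. This is a generic sketch; the variable names you pass in will depend on your stack:

```ts
// Illustrative startup check: collect every missing variable and fail once
// with the complete list, instead of discovering them one 500 at a time.
function assertEnv(
  required: string[],
  env: Record<string, string | undefined> = process.env
): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Example: run once at boot, before serving traffic.
// assertEnv(["OPENAI_API_KEY"]);
```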
How to Debug It
- Log the real exception before returning HTTP 500. Don't log only `"Internal Server Error"`; log the full stack and any LangChain metadata:

```ts
catch (err) {
  console.error("chain failed", {
    message: err instanceof Error ? err.message : String(err),
    stack: err instanceof Error ? err.stack : undefined,
  });
  throw err;
}
```

- Reproduce with one exact failing payload. Capture the request body that caused the incident and replay it locally against the same chain version.
- Disable tools and agents temporarily. If a plain `ChatOpenAI.invoke()` works but agent execution fails, your problem is likely tool routing or parser-related.
- Check provider response codes and token usage. If failures correlate with spikes in traffic or long prompts, inspect rate limits and context length first.
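Once the real exception is logged, you can also stop returning a blanket 500 for failures that have a more accurate status. The message-sniffing patterns below are an illustrative fallback; match on your provider's typed errors (for example, an `error.status` field) where available:

```ts
// Illustrative error-to-status mapping. Matching on message text is a
// last resort; prefer typed provider errors when the SDK exposes them.
function statusForError(err: unknown): number {
  const message = err instanceof Error ? err.message : String(err);
  if (/429|too many requests/i.test(message)) return 429;   // rate limited
  if (/503|service unavailable/i.test(message)) return 503; // provider down
  if (/ECONNRESET|ETIMEDOUT/i.test(message)) return 502;    // upstream network failure
  if (/validation failed|failed to parse/i.test(message)) return 422; // bad tool/parser output
  return 500; // genuinely unknown
}
```

Deterministic statuses make the intermittent failures show up as distinct lines on your dashboards instead of one undifferentiated 500 count.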
Prevention
- Wrap every chain/agent entrypoint in a single `try/catch` and return typed errors from your API layer.
- Add request-level observability: prompt size, model name, tool names invoked, provider status codes.
- Keep prompts and tool schemas strict. Most intermittent production failures come from loose contracts between the model and your code.
If you want one rule to keep in mind: LangChain failures are rarely random. They are usually deterministic bugs exposed by production load, bad inputs, or missing async handling.
By Cyprian Aarons, AI Consultant at Topiax.