# How to Fix 'intermittent 500 errors during development' in LangChain (TypeScript)
Intermittent 500 errors in LangChain TypeScript usually mean your app is throwing at runtime, but not on every request. In development, this often shows up when you mix async model calls, streaming handlers, or serverless-style route handlers with unhandled exceptions.
The annoying part is that the stack trace often points at LangChain internals like RunnableSequence.invoke, ChatOpenAI.invoke, or OpenAIEmbeddings.embedDocuments, while the real bug is in your code around input shape, async handling, or request lifecycle.
## The Most Common Cause
The #1 cause I see is not awaiting an async LangChain call inside a route handler or service method. The request returns before the chain finishes, then the thrown error becomes an unhandled rejection or gets swallowed by the framework and surfaces as a generic 500.
Typical runtime symptoms include:

- `Error: Failed to parse text`
- `TypeError: Cannot read properties of undefined`
- `UnhandledPromiseRejectionWarning`
- `Error: OpenAI API returned an error`
Here’s the broken pattern next to the fixed one.

**Broken:**

```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

export async function POST(req: Request) {
  const { question } = await req.json();

  const prompt = PromptTemplate.fromTemplate(
    "Answer this question: {question}"
  );
  const chain = prompt.pipe(llm);

  // ❌ Missing await
  const result = chain.invoke({ question });

  return Response.json({ answer: result });
}
```

**Fixed:**

```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

export async function POST(req: Request) {
  try {
    const { question } = await req.json();

    const prompt = PromptTemplate.fromTemplate(
      "Answer this question: {question}"
    );
    const chain = prompt.pipe(llm);

    // ✅ Await the promise
    const result = await chain.invoke({ question });

    return Response.json({ answer: result.content });
  } catch (err) {
    console.error("LangChain route failed:", err);
    return Response.json(
      { error: "Internal Server Error" },
      { status: 500 }
    );
  }
}
```
If you are using `.invoke()`, `.stream()`, or `.batch()`, treat them as real async boundaries. Don’t let them escape a request handler without `await` and a `try/catch`.
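In development, it also helps to make swallowed rejections loud at the process level. This is a sketch using the standard Node.js `unhandledRejection` event, not a LangChain feature, and the log message is just a suggestion:

```ts
// Development-only: any rejected promise that nothing awaited or caught
// (e.g. a chain.invoke() missing its await) lands here instead of
// silently surfacing as a generic 500.
process.on("unhandledRejection", (reason) => {
  console.error("Unhandled rejection (check for a missing await):", reason);
});
```

Register it once at startup; remove or soften it in production if your framework already reports unhandled rejections.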
## Other Possible Causes
### 1. Invalid input shape passed into the chain
LangChain components are strict about what they receive. A prompt expecting `{ question: string }` will fail if you pass `{ query: string }` or a raw string.
```ts
// Broken
await chain.invoke({ query: "What is SOC 2?" });

// Fixed
await chain.invoke({ question: "What is SOC 2?" });
```

You’ll often see errors like:

- `TypeError: Missing value for input variable 'question'`
- `Error: Input to PromptTemplate is missing variables`
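You can catch shape mismatches before they ever reach the chain with a small guard. `assertInputVariables` below is a hypothetical helper, not part of LangChain:

```ts
// Hypothetical helper: verify the payload carries every prompt variable
// before .invoke() is called, so the failure names the missing key.
function assertInputVariables(
  input: Record<string, unknown>,
  required: string[]
): void {
  const missing = required.filter((key) => !(key in input));
  if (missing.length > 0) {
    throw new Error(`Missing prompt variables: ${missing.join(", ")}`);
  }
}

// Usage before the chain call:
// assertInputVariables(payload, ["question"]);
// await chain.invoke(payload);
```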
### 2. Streaming response not closed correctly

If you use `StreamingTextResponse` or manual streams, forgetting to consume or close the stream can produce intermittent failures that only appear under load.
```ts
// Broken
const stream = await llm.stream("Hello");
// never consumed or closed

// Fixed (inside a ReadableStream's start(controller) callback)
const stream = await llm.stream("Hello");
for await (const chunk of stream) {
  controller.enqueue(chunk);
}
controller.close();
```
In Next.js route handlers, also make sure you’re not returning a stream after the request context has already been torn down.
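The enqueue/close pattern can be wrapped once so that every code path either closes or errors the stream. This adapter is a sketch over the standard web `ReadableStream` API; it works for any async iterable of text chunks, which is how you would feed it once you extract content from what `llm.stream()` yields:

```ts
// Adapt an async iterable of text chunks into a web ReadableStream,
// guaranteeing the stream is closed on success and errored on failure.
function iterableToStream(chunks: AsyncIterable<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    async start(controller) {
      try {
        for await (const chunk of chunks) {
          controller.enqueue(encoder.encode(chunk));
        }
      } catch (err) {
        // Propagate the failure to the consumer instead of hanging.
        controller.error(err);
        return;
      }
      controller.close();
    },
  });
}
```

Because close/error live in one place, a thrown error mid-stream can no longer leave the response half-open.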
### 3. Model/API key misconfiguration in dev environment
A missing or wrong environment variable often looks intermittent because one terminal session has it and another doesn’t.
```ts
// Broken
const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Fixed
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set");
}
const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```
Common messages:

- `Error: OpenAI API key not found`
- `401 Unauthorized`
- `Incorrect API key provided`
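The startup check generalizes into a tiny helper so every required variable fails fast with its own name. `requireEnv` is a hypothetical utility, not a LangChain API:

```ts
// Hypothetical helper: read a required environment variable or fail
// loudly at startup instead of producing 401s mid-request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set; check your shell session and .env file`);
  }
  return value;
}

// Usage:
// const llm = new ChatOpenAI({ apiKey: requireEnv("OPENAI_API_KEY") });
```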
### 4. Callback/handler throwing inside LangChain hooks
If you attach custom callbacks and throw inside them, LangChain may surface it as a chain failure.
```ts
const llm = new ChatOpenAI({
  callbacks: [
    {
      handleLLMEnd() {
        throw new Error("metrics sink failed");
      },
    },
  ],
});
```
Keep callback code defensive:
```ts
callbacks: [
  {
    handleLLMEnd() {
      try {
        // send metrics
      } catch (err) {
        console.error("callback failed", err);
      }
    },
  },
];
```
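If you attach several hooks, the defensive `try/catch` can be applied once instead of per hook. `makeSafeHandler` below is a hypothetical wrapper for plain hook objects, not a LangChain API:

```ts
type HookFn = (...args: unknown[]) => void;

// Hypothetical wrapper: every hook on the handler object is wrapped so a
// throwing hook logs the failure instead of failing the chain run.
function makeSafeHandler<T extends Record<string, HookFn>>(handlers: T): T {
  const safe = {} as Record<string, HookFn>;
  for (const name of Object.keys(handlers)) {
    const fn = handlers[name];
    safe[name] = (...args: unknown[]) => {
      try {
        fn(...args);
      } catch (err) {
        console.error(`callback ${name} failed:`, err);
      }
    };
  }
  return safe as T;
}

// Usage:
// callbacks: [makeSafeHandler({ handleLLMEnd() { /* send metrics */ } })]
```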
## How to Debug It

- **Wrap the exact LangChain call in a `try/catch`.** Don’t catch only at the top of the request. Log the class name and method being called, e.g. `RunnableSequence.invoke`, `ChatOpenAI.invoke`, `OpenAIEmbeddings.embedQuery`.
- **Print the exact input payload.** Compare what your prompt expects versus what you pass. Log keys, not secrets: `console.log("input keys:", Object.keys(input));`
- **Disable streaming and callbacks temporarily.** Reduce the moving parts. If the error disappears without streaming, your bug is in response handling; if it disappears without callbacks, inspect your callback code next.
- **Run with strict env checks.** Fail fast on startup if env vars are missing. In dev, print which model and provider are configured: `console.log({ model: process.env.OPENAI_MODEL, hasKey: Boolean(process.env.OPENAI_API_KEY) });`
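The first two debugging steps can be folded into one wrapper used at every call site. `invokeWithLogging` is hypothetical; it assumes only that the target has an `invoke` method, which LangChain runnables do:

```ts
// Hypothetical debug wrapper around any runnable-like object: logs the
// input keys going in, and logs the error with a label before rethrowing.
async function invokeWithLogging<I extends object, O>(
  label: string,
  runnable: { invoke(input: I): Promise<O> },
  input: I
): Promise<O> {
  console.log(`[${label}] input keys:`, Object.keys(input));
  try {
    return await runnable.invoke(input);
  } catch (err) {
    console.error(`[${label}] failed:`, err);
    throw err;
  }
}

// Usage:
// const result = await invokeWithLogging("qa-chain", chain, { question });
```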
## Prevention

- Always wrap LangChain calls in a local `try/catch` close to the invocation point.
- Validate input before calling `.invoke()` so prompt variables match exactly.
- Add startup checks for required env vars and provider config.
- Keep streaming and callback logic isolated until basic non-streaming calls are stable.
If you’re seeing intermittent 500s, don’t start by blaming LangChain itself. In TypeScript apps, it’s usually one of three things: an unawaited promise, mismatched input shape, or broken stream/error handling around the chain call.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit