How to Fix 'async event loop error in production' in LangChain (Python)
If you’re seeing an async event loop error in production with LangChain, you’re usually dealing with Python’s asyncio rules being violated somewhere in your request path. The common symptom is a runtime crash like RuntimeError: This event loop is already running or RuntimeError: asyncio.run() cannot be called from a running event loop, often after moving code from a notebook or local script into FastAPI, Django, Celery, or another async-capable server.
The pattern is almost always the same: LangChain is being called with the wrong sync/async boundary, or something in your stack is trying to create or reuse an event loop incorrectly.
The Most Common Cause
The #1 cause is calling asyncio.run() inside code that’s already running inside an event loop.
This happens a lot when developers wrap LangChain async methods like ainvoke(), abatch(), or async chains inside helper functions that later get called from FastAPI endpoints, background tasks, or other async frameworks.
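You can reproduce the failure mode with nothing but the standard library. In this sketch, `fake_ainvoke` and `endpoint` are hypothetical stand-ins for `llm.ainvoke()` and a FastAPI handler:

```python
import asyncio

async def fake_ainvoke(prompt: str) -> str:
    # Stand-in for an async LangChain call such as llm.ainvoke()
    await asyncio.sleep(0)
    return f"answer:{prompt}"

def get_answer(prompt: str) -> str:
    # Looks harmless, but asyncio.run() refuses to start a second loop
    return asyncio.run(fake_ainvoke(prompt))

async def endpoint() -> str:
    # Simulates a FastAPI handler: a loop is already running here
    try:
        return get_answer("hello")
    except RuntimeError as exc:
        return f"RuntimeError: {exc}"

print(asyncio.run(endpoint()))
```

Running this prints the same `RuntimeError` about a running event loop that you would see in a real FastAPI route wrapping `asyncio.run()`.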
Broken vs fixed pattern

Broken:

```python
import asyncio
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

def get_answer(prompt: str):
    # ❌ Fails if called from an existing event loop
    return asyncio.run(llm.ainvoke(prompt))

# Called from FastAPI / async context
answer = get_answer("Summarize this contract")
```

Fixed:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

async def get_answer(prompt: str):
    # ✅ Use await inside async code
    return await llm.ainvoke(prompt)

# Called from FastAPI / async context
answer = await get_answer("Summarize this contract")
```
If you need a sync entrypoint for a script, keep it at the top level only:
```python
import asyncio
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

async def main():
    result = await llm.ainvoke("Summarize this contract")
    print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```

That works because asyncio.run() creates and owns the event loop exactly once, at process startup. It should not be nested inside request handlers, callbacks, or library code.
Other Possible Causes
1) Mixing sync LangChain calls with async server code
If you call blocking methods like .invoke() inside an async route, you can trigger loop contention or stall the server.
```python
# Broken
@app.post("/chat")
async def chat():
    result = chain.invoke({"question": "What is the policy?"})
    return {"answer": result}
```

```python
# Fixed
@app.post("/chat")
async def chat():
    result = await chain.ainvoke({"question": "What is the policy?"})
    return {"answer": result}
```
Use .invoke() in pure sync code and .ainvoke() in async code. Don’t mix them casually.
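If a dependency only exposes a sync API, you can still call it from an async route without stalling the loop by offloading the call to a worker thread. A stdlib-only sketch, where `blocking_invoke` is a hypothetical stand-in for a sync-only `chain.invoke()`:

```python
import asyncio
import time

def blocking_invoke(payload: dict) -> str:
    # Hypothetical stand-in for a sync-only chain.invoke()
    time.sleep(0.1)  # simulates blocking network I/O
    return f"answered: {payload['question']}"

async def chat() -> str:
    # asyncio.to_thread (Python 3.9+) runs the blocking call in a thread,
    # so the event loop stays free to serve other requests meanwhile
    return await asyncio.to_thread(blocking_invoke, {"question": "What is the policy?"})

print(asyncio.run(chat()))
```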
2) Creating a new event loop per request
This usually shows up when someone tries to “fix” the problem by manually creating loops.
```python
# Broken
def handler():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    return loop.run_until_complete(chain.ainvoke({"q": "test"}))
```
This can break under Uvicorn, Gunicorn workers, or any environment already managing its own loop.
```python
# Fixed
async def handler():
    return await chain.ainvoke({"q": "test"})
```
Let the framework manage the event loop unless you are writing a top-level script.
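If a helper genuinely must be callable from both sync scripts and async servers, one defensive pattern is to detect whether a loop is already running and refuse to nest. `run_sync` here is a hypothetical helper, not a LangChain API:

```python
import asyncio

def run_sync(coro_factory):
    # Hypothetical helper: only own a loop when no loop exists yet
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No running loop: top-level sync context, safe to call asyncio.run()
        return asyncio.run(coro_factory())
    raise RuntimeError("already inside an event loop; 'await' the coroutine instead")

async def work() -> int:
    await asyncio.sleep(0)
    return 42

print(run_sync(work))  # safe here: no loop is running at the top level
```

The explicit error beats silently spinning up a second loop, because it points the caller at the real fix: awaiting the coroutine.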
3) Running LangChain callbacks that do blocking I/O
Some callback handlers do file writes, HTTP calls, or database operations synchronously inside async execution paths.
```python
from langchain_core.callbacks import BaseCallbackHandler

class MyHandler(BaseCallbackHandler):
    def on_llm_end(self, response, **kwargs):
        save_to_db(response)  # blocking call inside async flow
```
Fix it by making the callback async-aware if your stack supports it, or offload blocking work:
```python
import asyncio

from langchain_core.callbacks import BaseCallbackHandler

class MyHandler(BaseCallbackHandler):
    def on_llm_end(self, response, **kwargs):
        # Schedule the async save on the already-running loop
        # (only valid when this hook fires inside a running loop)
        asyncio.create_task(async_save_to_db(response))
```
If the callback API only exposes sync hooks, move heavy work out of the callback path entirely.
4) Notebook-style code copied into production unchanged
Jupyter and IPython already run an event loop. Code that “works in notebook” often dies in production when wrapped differently.
```python
# Broken in some environments
result = asyncio.run(chain.ainvoke({"q": "hello"}))
```

In notebooks:

```python
result = await chain.ainvoke({"q": "hello"})
```

In production services:

```python
@app.get("/health")
async def health():
    return await chain.ainvoke({"q": "hello"})
```
The fix depends on where the code runs. Notebook code is not production-safe by default.
How to Debug It
1) Read the exact exception text

Look for:

- `RuntimeError: This event loop is already running`
- `RuntimeError: asyncio.run() cannot be called from a running event loop`
- `RuntimeError: There is no current event loop in thread '...'`

The wording tells you whether you’re nesting loops or calling async code from a non-async thread.

2) Trace where LangChain is called

Find whether you’re using:

- `.invoke()` vs `.ainvoke()`
- `.stream()` vs `.astream()`
- `.batch()` vs `.abatch()`

Make sure each call matches the surrounding function type:

- sync function → sync LangChain method
- async function → async LangChain method

3) Check your web framework boundary

- In FastAPI/Starlette: use `async def` endpoints so you can `await`.
- In Flask/Django sync views: don’t call `await` directly; either keep everything sync or isolate async work properly.
- In Celery workers: don’t assume there’s a running loop; initialize carefully if needed.

4) Search for hidden `asyncio.run()`

- It may not be in your main app file.
- Check utility functions, retry wrappers, callback handlers, and custom tool implementations.
- One nested `asyncio.run()` anywhere in the call chain can trigger production failures.
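A quick way to hunt for hidden calls is a small AST scan over your codebase. This sketch reports every `asyncio.run(...)` call site under a directory:

```python
import ast
import pathlib

def find_asyncio_run(root: str) -> list:
    # Return (file, line) pairs for every asyncio.run(...) call under root
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and node.func.attr == "run"
                    and isinstance(node.func.value, ast.Name)
                    and node.func.value.id == "asyncio"):
                hits.append((str(path), node.lineno))
    return hits

if __name__ == "__main__":
    for file, line in find_asyncio_run("."):
        print(f"{file}:{line} calls asyncio.run()")
```

It only catches the literal `asyncio.run` attribute form (not aliased imports), but that covers the overwhelmingly common case.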
Prevention
- Keep one rule: sync code uses sync LangChain APIs; async code uses async APIs.
- Don’t call `asyncio.run()` inside request handlers, callbacks, or library helpers.
- Add tests for both execution paths:
  - local script execution
  - API server execution under Uvicorn/Gunicorn
If you standardize those boundaries early, this class of error disappears fast instead of showing up after deployment.
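Both execution paths are cheap to cover in tests. A minimal sketch, with hypothetical `sync_entry`/`async_entry` stand-ins for your real entrypoints:

```python
import asyncio

def sync_entry(prompt: str) -> str:
    # Hypothetical sync entrypoint (would wrap chain.invoke)
    return f"sync:{prompt}"

async def async_entry(prompt: str) -> str:
    # Hypothetical async entrypoint (would wrap chain.ainvoke)
    await asyncio.sleep(0)
    return f"async:{prompt}"

def test_sync_path():
    assert sync_entry("hi") == "sync:hi"

def test_async_path():
    # A sync test may own a fresh top-level loop
    assert asyncio.run(async_entry("hi")) == "async:hi"

test_sync_path()
test_async_path()
print("both paths ok")
```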
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.