How to Fix 'async event loop error during development' in LangChain (Python)
If you’re seeing `RuntimeError: asyncio.run() cannot be called from a running event loop` or `RuntimeError: This event loop is already running`, you’re not dealing with a LangChain bug. You’re calling async code the wrong way for the runtime you’re in.
This usually shows up during development in Jupyter, FastAPI, Streamlit, or any app that already owns the event loop. LangChain’s async APIs are fine; the problem is how they’re being invoked.
The Most Common Cause
The #1 cause is calling `asyncio.run()` inside an environment that already has an active event loop.
That happens a lot when developers copy a notebook example into app code, or wrap LangChain async methods inside another async framework.
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Calls `asyncio.run()` from inside an async context | Uses `await` directly inside async code |
| Tries to start a new event loop | Reuses the existing event loop |
```python
# BROKEN
import asyncio
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

async def ask():
    response = await llm.ainvoke("What is LangChain?")
    return response.content

# In FastAPI, Jupyter, or any async app:
result = asyncio.run(ask())  # RuntimeError: asyncio.run() cannot be called from a running event loop
print(result)
```
```python
# FIXED
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

async def ask():
    response = await llm.ainvoke("What is LangChain?")
    return response.content

# In an async function:
# result = await ask()
```
If you are in plain synchronous Python, then `asyncio.run()` is fine:

```python
import asyncio
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

async def ask():
    response = await llm.ainvoke("What is LangChain?")
    return response.content

if __name__ == "__main__":
    print(asyncio.run(ask()))
```
The rule is simple:

- Inside async code: use `await`
- Outside async code: use `asyncio.run()`
- Never nest `asyncio.run()` inside an already-running loop
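The boundary rule can be sketched with plain asyncio, no LangChain required. Here `main_async` is a hypothetical stand-in for any coroutine, such as `llm.ainvoke(...)`:

```python
import asyncio

async def main_async() -> str:
    # Stand-in for an async LangChain call like `await llm.ainvoke(...)`.
    await asyncio.sleep(0)
    return "done"

async def async_entrypoint() -> str:
    # Inside async code: await directly; never call asyncio.run() here.
    return await main_async()

def sync_entrypoint() -> str:
    # Outside async code: asyncio.run() starts and cleanly closes a fresh loop.
    return asyncio.run(main_async())

print(sync_entrypoint())
```

Each entrypoint picks exactly one side of the boundary; the coroutine itself doesn’t change.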
Other Possible Causes
1) Mixing sync and async LangChain methods incorrectly
A common mistake is using `.invoke()` in async code or `.ainvoke()` in sync code without handling the boundary properly.
```python
# BROKEN
async def handler():
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini")
    result = llm.invoke("Summarize this")  # blocking call inside async flow
    return result.content
```

```python
# FIXED
async def handler():
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini")
    result = await llm.ainvoke("Summarize this")
    return result.content
```
If you’re using chains, the same rule applies:

- `.invoke()` / `.batch()` for sync paths
- `.ainvoke()` / `.abatch()` for async paths
2) Running LangChain inside Jupyter without top-level await support
Jupyter already runs an event loop. If you call `asyncio.run()` there, you’ll hit the error immediately.
```python
# BROKEN in notebook cells
import asyncio

result = asyncio.run(chain.ainvoke({"question": "Hi"}))
```

```python
# FIXED in notebook cells
result = await chain.ainvoke({"question": "Hi"})
```
If your notebook kernel doesn’t support top-level await, upgrade the environment instead of patching around it with hacks.
3) Wrapping an async chain inside a sync callback or tool
This shows up when using LangChain tools, agents, or callbacks that expect sync functions but you pass them coroutine-based logic.
```python
# BROKEN
def tool_fn(query: str):
    return chain.ainvoke({"query": query})  # returns a coroutine, not a result
```

```python
# FIXED option A: make it async if the caller supports it
async def tool_fn(query: str):
    return await chain.ainvoke({"query": query})
```
If the framework requires sync tools, keep the tool sync and call only sync LangChain APIs:
```python
# FIXED option B: stay fully synchronous
def tool_fn(query: str):
    return chain.invoke({"query": query})
```
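You can see the broken variant’s failure mode with a plain coroutine, no chain needed. `ainvoke_stub` below is a hypothetical stand-in for `chain.ainvoke(...)`:

```python
import asyncio

async def ainvoke_stub(payload: dict) -> str:
    # Hypothetical stand-in for chain.ainvoke(...); any coroutine behaves this way.
    await asyncio.sleep(0)
    return f"answer to {payload['query']}"

def broken_tool(query: str):
    # BROKEN: returns a coroutine object, not the answer string.
    return ainvoke_stub({"query": query})

async def fixed_tool(query: str) -> str:
    # FIXED: awaiting the coroutine yields the actual result.
    return await ainvoke_stub({"query": query})

pending = broken_tool("hi")
print(type(pending).__name__)  # coroutine, not str
pending.close()  # silence the "coroutine was never awaited" warning
print(asyncio.run(fixed_tool("hi")))
```

The sync caller gets a `coroutine` object back and usually fails later with a confusing type error, which is why the boundary has to be fixed at the tool definition.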
4) Event-loop conflicts from FastAPI, Streamlit, or uvicorn reloads
Frameworks like FastAPI and Streamlit manage their own runtime. If your startup code calls `asyncio.run()`, it will fail under those hosts.

```python
# BROKEN in FastAPI startup code
@app.on_event("startup")
def startup():
    asyncio.run(warmup_chain())  # nests a loop inside the host's running loop
```

```python
# FIXED in FastAPI startup code
@app.on_event("startup")
async def startup():
    await warmup_chain()
```
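The host-owned-loop situation can be reproduced with stdlib asyncio alone. `warmup_stub` and `host_app` are hypothetical stand-ins for your warmup coroutine and the framework’s already-running loop:

```python
import asyncio

async def warmup_stub() -> str:
    # Hypothetical stand-in for warmup_chain().
    await asyncio.sleep(0)
    return "warm"

async def host_app() -> str:
    # A host like uvicorn already owns the running loop at this point.
    coro = warmup_stub()
    try:
        asyncio.run(coro)  # what the broken startup hook effectively does
    except RuntimeError as exc:
        coro.close()  # discard the never-started coroutine
        print(f"nested asyncio.run() failed: {exc}")
    # Correct: await it, or hand it to the already-running loop as a task.
    return await asyncio.get_running_loop().create_task(warmup_stub())

print(asyncio.run(host_app()))
```

Inside a running loop, the nested `asyncio.run()` raises immediately; `await` (or `create_task` on the existing loop) is the only correct move.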
For Streamlit, keep expensive async work behind supported patterns or move it into a background service.
How to Debug It
- Read the exact exception text.
  - If you see `asyncio.run() cannot be called from a running event loop`, you are nesting loops.
  - If you see `This event loop is already running`, you are likely in Jupyter or another managed runtime.
- Check where the call originates.
  - Search for `asyncio.run(` in your project.
  - Search for `.invoke(` inside `async def` functions.
  - Search for `.ainvoke(` being returned without `await`.
- Print the execution context:

  ```python
  import asyncio

  try:
      loop = asyncio.get_running_loop()
      print(f"Running loop: {loop}")
  except RuntimeError:
      print("No running event loop")
  ```

  If there’s a running loop, don’t start another one.
- Reduce to one LangChain call.
  - Replace your full agent/chain with a single `ChatOpenAI().ainvoke(...)`.
  - If that works, the issue is your wrapper code, not LangChain itself.
Prevention
- Keep one clear boundary between sync and async code.
  - Sync entrypoint: use `.invoke()` and `asyncio.run()`.
  - Async entrypoint: use `.ainvoke()` and `await`.
- Don’t copy notebook snippets into app servers without checking the runtime.
  - Jupyter, FastAPI, and Streamlit behave differently from plain scripts.
- Standardize your project style early.
  - If most of your app is async, make your chains, tools, and handlers async end-to-end.
  - If most of it is sync, stay sync unless you truly need concurrency.
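One way to keep that boundary explicit is an async core with a single thin sync wrapper, sketched here with a hypothetical `run_pipeline` standing in for your chain logic:

```python
import asyncio

async def run_pipeline(question: str) -> str:
    # Async core: every chain/tool call would live here, awaited end-to-end.
    await asyncio.sleep(0)
    return f"handled: {question}"

def run_pipeline_sync(question: str) -> str:
    # The one sync entrypoint: the only place asyncio.run() appears.
    return asyncio.run(run_pipeline(question))

print(run_pipeline_sync("What is LangChain?"))
```

Async callers (FastAPI handlers, notebooks) `await run_pipeline(...)` directly; scripts call the wrapper. Nothing else ever starts a loop.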
The fix is usually small once you find the boundary violation. In practice, this error almost always means “wrong place to start an event loop,” not “LangChain is broken.”
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.