LangGraph Tutorial (Python): adding observability for beginners
This tutorial shows you how to add basic observability to a LangGraph app in Python using LangSmith tracing. By the end, you’ll be able to inspect each graph run, see node-level execution, and debug failures without sprinkling print statements everywhere.
What You'll Need

- Python 3.10+
- A LangSmith account
- A LangSmith API key
- `langgraph`
- `langchain-core`
- `langchain-openai`
- An OpenAI API key
- Basic familiarity with LangGraph nodes, edges, and state

Install the packages first:

```bash
pip install langgraph langchain-core langchain-openai langsmith
```
Set your environment variables:

```bash
export OPENAI_API_KEY="your-openai-key"
export LANGSMITH_API_KEY="your-langsmith-key"
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="langgraph-observability-demo"
```
Step-by-Step
- Start with a minimal graph so you have something real to trace. We’ll use a single-node workflow that takes a message and rewrites it with an LLM.

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    message: str
    response: str


llm = ChatOpenAI(model="gpt-4o-mini")


def rewrite_message(state: State) -> dict:
    result = llm.invoke(f"Rewrite this professionally: {state['message']}")
    return {"response": result.content}


builder = StateGraph(State)
builder.add_node("rewrite_message", rewrite_message)
builder.add_edge(START, "rewrite_message")
builder.add_edge("rewrite_message", END)
graph = builder.compile()
```
- Add tracing configuration through environment variables. LangGraph and LangChain will automatically send traces to LangSmith when tracing is enabled, so you usually do not need custom logging hooks for the first pass.

```python
import os

required_vars = [
    "OPENAI_API_KEY",
    "LANGSMITH_API_KEY",
    "LANGSMITH_TRACING",
    "LANGSMITH_PROJECT",
]

missing = [name for name in required_vars if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

print("Tracing is enabled:", os.getenv("LANGSMITH_TRACING"))
print("Project:", os.getenv("LANGSMITH_PROJECT"))
```
- Invoke the graph with a real input and keep the result. This run will appear in LangSmith as a trace with the graph execution plus the LLM call inside your node.

```python
input_state = {"message": "we need this by friday"}
output = graph.invoke(input_state)

print("Input:", input_state)
print("Output:", output)
```
- Make observability more useful by adding metadata and tags at invoke time. This is how you separate runs by environment, customer segment, or workflow type when you start debugging production traffic.

```python
output = graph.invoke(
    {"message": "please review the attached document"},
    config={
        "tags": ["demo", "observability", "langgraph"],
        "metadata": {
            "environment": "dev",
            "team": "platform",
            "workflow": "rewrite-message",
        },
    },
)
print(output)
```
- Add a second node so you can see multi-step execution in the trace UI. Once you have branching or chained nodes, observability becomes much more valuable because you can inspect where latency and failures happen.

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    message: str
    draft: str
    response: str


llm = ChatOpenAI(model="gpt-4o-mini")


def draft_message(state: State) -> dict:
    result = llm.invoke(f"Draft a concise business message about: {state['message']}")
    return {"draft": result.content}


def polish_message(state: State) -> dict:
    result = llm.invoke(f"Polish this draft for clarity:\n{state['draft']}")
    return {"response": result.content}


builder = StateGraph(State)
builder.add_node("draft_message", draft_message)
builder.add_node("polish_message", polish_message)
builder.add_edge(START, "draft_message")
builder.add_edge("draft_message", "polish_message")
builder.add_edge("polish_message", END)
graph = builder.compile()
```
Testing It
Run the script once with valid API keys and open your LangSmith project dashboard. You should see one trace per `graph.invoke()` call, plus nested spans for each node and LLM request.

If tracing is working correctly, the trace will show:

- The graph run name or project name you configured
- Node execution order
- Input and output payloads for each step
- Any errors raised inside a node
If nothing appears in LangSmith, check these first:

- `LANGSMITH_TRACING` is set to `"true"`
- Your API key is valid
- You are looking at the correct project name
- The process had enough time to flush traces before exiting
A quick sanity check is to intentionally break a node, for example by referencing a missing state key. The failure should show up in LangSmith with the exact node that crashed, which is the whole point of adding observability early.
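To see what that failure mode looks like in isolation, here is a minimal sketch of a node function that reads a state key that was never populated (`broken_node` is a made-up name for illustration, run outside any graph). Inside a real graph run, the same `KeyError` would appear in LangSmith attached to the node that raised it:

```python
# A node that references a key the state does not contain. In a graph run,
# this KeyError is recorded against this specific node in the trace.
def broken_node(state: dict) -> dict:
    return {"response": state["missing_key"]}  # raises KeyError

try:
    broken_node({"message": "hello"})
except KeyError as err:
    print("Node failed with:", err)  # prints: Node failed with: 'missing_key'
```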
Next Steps

- Add custom callbacks for structured logging alongside LangSmith traces.
- Learn how to attach user IDs and request IDs through `config["metadata"]`.
- Move from simple linear graphs to conditional routing and inspect those branches in traces.
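As a starting point for the user ID and request ID idea, here is a small hypothetical helper (`build_run_config` is not a LangGraph API, just a sketch) that assembles the `config` dict you would pass at invoke time:

```python
import uuid

def build_run_config(user_id: str, environment: str = "dev") -> dict:
    """Build an invoke-time config whose IDs show up as metadata in LangSmith."""
    return {
        "tags": [environment],
        "metadata": {
            "user_id": user_id,
            "request_id": str(uuid.uuid4()),  # unique ID generated per run
            "environment": environment,
        },
    }

config = build_run_config("user-123")
print(config["metadata"]["user_id"])  # user-123
```

You would then call `graph.invoke(input_state, config=config)` and filter traces by `user_id` or `request_id` in the LangSmith UI when debugging a specific session.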
Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.