LangGraph Tutorial (Python): adding audit logs for beginners
This tutorial shows how to add audit logs to a LangGraph Python app so every important state change, node execution, and final result is captured in a structured way. You need this when you’re building regulated workflows, debugging agent behavior, or proving to an internal reviewer why the graph made a specific decision.
What You'll Need
- Python 3.10+
- langgraph
- langchain-core
- No API key is required for this example
- A terminal and a virtual environment
- Basic familiarity with LangGraph nodes, edges, and state
Install the packages:
pip install langgraph langchain-core
Step-by-Step
- First, define a simple graph state and an audit log structure. The audit log will be stored separately from the business state so it’s easy to export later.
from typing import TypedDict, Annotated
from operator import add
from langgraph.graph import StateGraph, START, END
class GraphState(TypedDict):
    input_text: str
    processed_text: str
    audit_log: Annotated[list[str], add]
- Next, create a small helper that writes audit entries in a consistent format. In production, this could write to a database or append-only file, but for beginners we’ll keep it in-memory and deterministic.
def audit_entry(step: str, message: str) -> str:
    return f"[{step}] {message}"

def normalize_text(state: GraphState) -> dict:
    text = state["input_text"].strip()
    return {
        "processed_text": text.lower(),
        "audit_log": [
            audit_entry("normalize_text", f"normalized input='{state['input_text']}' to '{text.lower()}'")
        ],
    }
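As a quick check, the helper above produces deterministic, parseable strings you can eyeball or grep:

```python
# Re-declare the helper from the tutorial so this snippet runs standalone.
def audit_entry(step: str, message: str) -> str:
    return f"[{step}] {message}"

entry = audit_entry("normalize_text", "normalized ' Hi ' to 'hi'")
print(entry)  # → [normalize_text] normalized ' Hi ' to 'hi'
```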
- Add another node that performs a second transformation and logs what happened. Notice that each node returns both business output and an audit record in the same shape.
def enrich_text(state: GraphState) -> dict:
    processed = state["processed_text"]
    enriched = f"{processed} | length={len(processed)}"
    return {
        "processed_text": enriched,
        "audit_log": [
            audit_entry("enrich_text", f"enriched text to '{enriched}'")
        ],
    }
- Build the graph by wiring the nodes together and compiling it. The key detail here is the Annotated[list[str], add] field on audit_log, which tells LangGraph how to merge multiple log entries across steps.
graph = StateGraph(GraphState)
graph.add_node("normalize_text", normalize_text)
graph.add_node("enrich_text", enrich_text)
graph.add_edge(START, "normalize_text")
graph.add_edge("normalize_text", "enrich_text")
graph.add_edge("enrich_text", END)
app = graph.compile()
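The merge behavior declared by Annotated[list[str], add] is just list concatenation: conceptually, LangGraph calls the reducer with the existing value and each node's update. You can see the same semantics with plain operator.add:

```python
from operator import add

# The reducer from Annotated[list[str], add]: existing entries on the
# left, the new node's entries on the right, concatenated in order.
existing = ["[normalize_text] normalized"]
update = ["[enrich_text] enriched"]
merged = add(existing, update)
print(merged)  # → ['[normalize_text] normalized', '[enrich_text] enriched']
```

This is why each node returns a one-element list rather than appending to the log itself: the reducer handles accumulation.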
- Run the graph with an initial input and print both the final result and the audit trail. This gives you a clean pattern you can reuse in real apps where logs must be inspectable after execution.
result = app.invoke(
    {
        "input_text": " Hello LangGraph ",
        "processed_text": "",
        "audit_log": [],
    }
)
print("Final output:")
print(result["processed_text"])
print("\nAudit log:")
for line in result["audit_log"]:
    print(line)
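If you want to sanity-check the expected output without running the graph, the two node bodies reduce to plain functions you can compose directly:

```python
# Pure-Python equivalents of the two nodes' transformations.
def normalize(text: str) -> str:
    return text.strip().lower()

def enrich(text: str) -> str:
    return f"{text} | length={len(text)}"

final = enrich(normalize(" Hello LangGraph "))
print(final)  # → hello langgraph | length=15
```

The graph's result["processed_text"] should match this composed value.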
- If you want better observability, add metadata like request IDs or user IDs into your state before execution. That lets you trace one run across multiple systems without guessing which log lines belong together.
class GraphStateWithMeta(TypedDict):
    request_id: str
    input_text: str
    processed_text: str
    audit_log: Annotated[list[str], add]

def normalize_with_meta(state: GraphStateWithMeta) -> dict:
    text = state["input_text"].strip().lower()
    return {
        "processed_text": text,
        "audit_log": [
            f"[request_id={state['request_id']}] normalize_with_meta -> '{text}'"
        ],
    }
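One simple way to mint request IDs is the standard-library uuid module (this is a suggestion, not something LangGraph provides for you):

```python
import uuid

# Build the initial state for one run; uuid4().hex gives a 32-character
# hex string that is unique per invocation.
initial_state = {
    "request_id": uuid.uuid4().hex,
    "input_text": " Hello LangGraph ",
    "processed_text": "",
    "audit_log": [],
}
print(initial_state["request_id"])
```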
Testing It
Run the script and confirm that result["processed_text"] contains the transformed string from both nodes. Then check that result["audit_log"] has one entry per node execution in the correct order.
If you want to test failure handling, make one node raise an exception and verify whether your application layer records that failure separately from the graph state. For real systems, you usually want both: graph-level audit entries for successful steps and external error logs for exceptions.
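A minimal sketch of that split, using a hypothetical failing node and an application-level error list (neither name comes from LangGraph's API):

```python
# Application-level error log, kept separate from graph state.
errors: list[str] = []

def flaky_node(state: dict) -> dict:
    # Hypothetical node that always fails, for testing error handling.
    raise ValueError("upstream service unavailable")

state = {"audit_log": ["[normalize_text] ok"]}
try:
    state.update(flaky_node(state))
except ValueError as exc:
    # Graph state keeps only successful steps; the failure is
    # recorded externally so the audit trail stays append-only.
    errors.append(f"flaky_node failed: {exc}")

print(state["audit_log"])  # successful entries only
print(errors)
```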
A good sanity check is to change the input text and rerun it several times. The output should change predictably, while the log format stays stable enough for downstream parsing or storage.
Next Steps
- Add timestamps and actor IDs to each audit entry using a structured dict instead of plain strings.
- Write audit logs to PostgreSQL or OpenSearch instead of keeping them only in memory.
- Learn LangGraph callbacks and tracing so you can correlate node-level logs with full run traces.
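For the first item, a structured entry might look like this (the field names are illustrative, not a LangGraph convention):

```python
from datetime import datetime, timezone

def structured_entry(step: str, actor_id: str, message: str) -> dict:
    # A dict entry is easier to index and query downstream than a string.
    return {
        "step": step,
        "actor_id": actor_id,
        "message": message,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

entry = structured_entry("normalize_text", "user-42", "normalized input")
print(entry["step"], entry["timestamp"])
```

If you switch to dict entries, change the state field to Annotated[list[dict], add]; the reducer works the same way.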
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.