LangGraph Tutorial (Python): adding tool use for intermediate developers
This tutorial shows how to add tool use to a LangGraph agent in Python, using the standard tool-calling loop: model decides, tool runs, model responds. You need this when your agent must do something concrete like fetch account data, look up policy terms, or query an internal API instead of only generating text.
What You'll Need
- Python 3.10+
- langgraph
- langchain-openai
- langchain-core
- An OpenAI API key set as OPENAI_API_KEY
- Basic familiarity with LangGraph state graphs and chat models
- A terminal and a virtual environment

Install the packages:

```bash
pip install langgraph langchain-openai langchain-core
```
Step-by-Step
- Start with a minimal graph state that stores chat messages. LangGraph works cleanly when you keep state explicit and let nodes transform it. The add_messages reducer tells LangGraph to append incoming messages to the list instead of overwriting it.

```python
from typing import Annotated, Sequence

from typing_extensions import TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages


class State(TypedDict):
    # add_messages appends new messages rather than replacing the list
    messages: Annotated[Sequence[BaseMessage], add_messages]
```
- Define the tools your agent can call. Keep them small and deterministic; for production systems, tools should wrap real services or internal functions with clear inputs and outputs.

```python
from langchain_core.tools import tool


@tool
def get_policy_status(policy_id: str) -> str:
    """Return the current status of an insurance policy."""
    if policy_id == "POL123":
        return "Policy POL123 is active and paid through 2026-01-01."
    return f"Policy {policy_id} was not found."


tools = [get_policy_status]
```
- Create the model node and bind the tools to it. The model needs tool metadata so it can emit structured tool calls instead of plain text. ChatOpenAI reads OPENAI_API_KEY from the environment automatically, so no explicit key-handling code is needed here.

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
llm_with_tools = llm.bind_tools(tools)


def assistant(state: State):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}
```
- Add a tool execution node that reads the last AI message and runs every requested tool call. This is the part most people miss: the assistant does not execute tools itself; LangGraph routes those calls into a separate node.

```python
from langchain_core.messages import ToolMessage


def tool_node(state: State):
    last_message = state["messages"][-1]
    tool_messages = []
    for tool_call in last_message.tool_calls:
        if tool_call["name"] == "get_policy_status":
            result = get_policy_status.invoke(tool_call["args"])
            tool_messages.append(
                ToolMessage(content=result, tool_call_id=tool_call["id"])
            )
    return {"messages": tool_messages}
```
- Wire the graph with conditional routing so it loops back to the assistant after tools run. The model should keep going until it stops asking for tools.

```python
from langgraph.graph import StateGraph, START, END


def should_continue(state: State):
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return END


graph_builder = StateGraph(State)
graph_builder.add_node("assistant", assistant)
graph_builder.add_node("tools", tool_node)
graph_builder.add_edge(START, "assistant")
graph_builder.add_conditional_edges("assistant", should_continue)
graph_builder.add_edge("tools", "assistant")
app = graph_builder.compile()
```
- Run the graph with a user message and print the final answer. Use a real prompt that forces a lookup so you can see the full tool loop in action.

```python
from langchain_core.messages import HumanMessage

result = app.invoke(
    {
        "messages": [
            HumanMessage(content="Check policy POL123 and tell me whether it is active.")
        ]
    }
)
print(result["messages"][-1].content)
```
Testing It
Run the script and confirm that you see at least one tool call before the final response is printed. If everything is wired correctly, the assistant should ask for get_policy_status, LangGraph should execute it, and then the assistant should answer using the returned text.
A good sanity check is to change POL123 to an unknown policy ID and verify that the output changes accordingly. That tells you both branches are working: successful lookup and not-found handling.
If you want more visibility during debugging, print each message in result["messages"] and inspect where tool_calls appear. In production, this same trace becomes your audit trail for agent decisions.
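The inspection loop described above can be packaged as a small helper. A sketch, assuming only that each message exposes .content and, on AI messages, a .tool_calls list; the SimpleNamespace objects below are stand-ins for real LangChain messages so the function can be demonstrated without an API call:

```python
from types import SimpleNamespace


def trace_messages(messages) -> list[str]:
    """Summarize a message list, marking where tool calls appear."""
    lines = []
    for i, msg in enumerate(messages):
        calls = getattr(msg, "tool_calls", None) or []
        if calls:
            names = ", ".join(c["name"] for c in calls)
            lines.append(f"{i}: tool_calls -> {names}")
        else:
            lines.append(f"{i}: {str(msg.content)[:60]}")
    return lines


# Demo with stand-in messages; a real run would pass result["messages"].
demo = [
    SimpleNamespace(content="Check policy POL123.", tool_calls=[]),
    SimpleNamespace(
        content="",
        tool_calls=[{"name": "get_policy_status", "args": {"policy_id": "POL123"}, "id": "c1"}],
    ),
    SimpleNamespace(content="Policy POL123 is active and paid through 2026-01-01.", tool_calls=[]),
]
for line in trace_messages(demo):
    print(line)
```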
Next Steps
- Add multiple tools and route them through a real dispatcher instead of hardcoding names.
- Replace the fake lookup with a database query or internal REST API call.
- Learn how to add guardrails for retries, timeouts, and malformed tool arguments.
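The first and third of those steps can be combined: route calls through a name-to-tool mapping and catch malformed arguments instead of letting one bad call crash the graph. A framework-free sketch of the pattern; the registry, function, and error strings below are illustrative, not a LangGraph API:

```python
def get_policy_status(policy_id: str) -> str:
    """Stand-in tool; in the real graph this wraps a service call."""
    return f"Policy {policy_id} was not found."


# Dispatcher: look tools up by name instead of an if/elif chain per tool.
TOOLS = {"get_policy_status": get_policy_status}


def run_tool_call(tool_call: dict) -> str:
    """Execute one tool call, turning failures into text the model can read."""
    fn = TOOLS.get(tool_call["name"])
    if fn is None:
        return f"Error: unknown tool {tool_call['name']!r}."
    try:
        return fn(**tool_call["args"])
    except TypeError as exc:  # malformed or missing arguments
        return f"Error: bad arguments for {tool_call['name']}: {exc}"


print(run_tool_call({"name": "get_policy_status", "args": {"policy_id": "POL999"}}))
print(run_tool_call({"name": "get_policy_status", "args": {"bad_key": 1}}))
```

Returning the error text as a ToolMessage, rather than raising, lets the model see what went wrong and retry with corrected arguments.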
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.