LangGraph Tutorial (Python): adding tool use for beginners
This tutorial shows how to add tool use to a LangGraph agent in Python, then wire it so the model can decide when to call a tool and when to answer directly. You need this when you want your graph to do more than chat: fetch live data, look up internal knowledge, or perform deterministic actions before responding.
What You'll Need

- Python 3.10+
- langgraph
- langchain-openai
- langchain-core
- An OpenAI API key in OPENAI_API_KEY
- Basic familiarity with LangGraph state graphs
- A terminal and a virtual environment

Install the packages:

```bash
pip install langgraph langchain-openai langchain-core
```
Step-by-Step

- First, define a simple tool. For beginners, keep it deterministic and easy to inspect so you can see exactly when the model chooses to call it.

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    weather_map = {
        "london": "Cloudy, 14°C",
        "nairobi": "Sunny, 26°C",
        "new york": "Rainy, 9°C",
    }
    return weather_map.get(city.lower(), f"No weather data for {city}")
```
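Before wiring the tool into a graph, it helps to check the lookup's fallback contract in isolation. This plain-Python sketch repeats the function body without the decorator (an illustrative stand-in, not part of the graph) so you can confirm the case-folding and the default string:

```python
def get_weather_plain(city: str) -> str:
    # Same body as the @tool version, minus the decorator,
    # so the lookup logic can be exercised on its own.
    weather_map = {
        "london": "Cloudy, 14°C",
        "nairobi": "Sunny, 26°C",
        "new york": "Rainy, 9°C",
    }
    # .get() with a default means unknown cities never raise.
    return weather_map.get(city.lower(), f"No weather data for {city}")

print(get_weather_plain("NAIROBI"))  # case-insensitive hit: "Sunny, 26°C"
print(get_weather_plain("Tokyo"))    # fallback string, not an exception
```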
- Next, create a chat model and bind the tool to it. Binding is what tells the model which tools exist and how to request them in its response.

```python
from langchain_openai import ChatOpenAI

# Reads the OPENAI_API_KEY environment variable automatically.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_weather]
llm_with_tools = llm.bind_tools(tools)
```
- Now define the graph state and the agent node. The agent node sends the conversation to the model and returns the updated message list.

```python
from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]

def agent(state: State):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}
```
- Add a tool node and a router. The router checks whether the last assistant message requested a tool call; if it did, execution goes to the tool node, otherwise the graph ends.

```python
from langgraph.graph import END
from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools)

def should_call_tool(state: State):
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return END
```
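The routing rule only inspects one attribute on the last message, so you can see both branches without calling a model at all. This stdlib-only sketch uses a stand-in message class and an END placeholder (both assumptions for illustration, in place of LangChain's AIMessage and LangGraph's END sentinel):

```python
END = "__end__"  # stand-in for langgraph.graph.END in this sketch

class FakeMessage:
    """Minimal stand-in: only the attribute the router reads."""
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

def should_call_tool(state):
    last_message = state["messages"][-1]
    # A non-empty tool_calls list routes to the tool node; otherwise end.
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return END

wants_tool = {"messages": [FakeMessage(tool_calls=[{"name": "get_weather"}])]}
plain_answer = {"messages": [FakeMessage()]}
print(should_call_tool(wants_tool))    # -> "tools"
print(should_call_tool(plain_answer))  # -> "__end__"
```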
- Build the graph, compile it, and run it with a user question. This is the part that turns your agent into an actual loop: model -> tool -> model.

```python
from langgraph.graph import StateGraph, START

graph = StateGraph(State)
graph.add_node("agent", agent)
graph.add_node("tools", tool_node)

graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_call_tool)
graph.add_edge("tools", "agent")

app = graph.compile()

result = app.invoke(
    {"messages": [("user", "What's the weather in Nairobi?")]}
)
for message in result["messages"]:
    print(f"{message.type}: {message.content}")
```
- If you want cleaner output for beginners, print only the final assistant response after the tool round-trip. This makes it easier to confirm that tool use happened without reading every internal message.

```python
final_message = result["messages"][-1]
print("\nFinal answer:")
print(final_message.content)
```
Testing It
Run the script and ask for a city that exists in your lookup table, like Nairobi or London. You should see at least one assistant message with a tool call behind it, followed by a final assistant answer that uses the returned weather string.
Then test a city that is not in the map. The tool should still be called if the model decides it needs fresh data, but it will return your fallback string instead of crashing.
If you want to verify routing behavior more explicitly, inspect message.tool_calls on the intermediate assistant messages. That tells you whether the router sent execution into the ToolNode or ended the graph right away.
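One way to script that check is to walk the message list and count assistant messages that carry tool calls. The sketch below uses stand-in message objects with the same `type` and `tool_calls` attributes you'd find on real LangChain messages (the class and sample transcript are illustrative assumptions):

```python
class Msg:
    """Stand-in for a LangChain message: just type, content, tool_calls."""
    def __init__(self, type_, content, tool_calls=None):
        self.type = type_
        self.content = content
        self.tool_calls = tool_calls or []

# Shape of a successful round-trip: human -> ai (tool call) -> tool -> ai.
messages = [
    Msg("human", "What's the weather in Nairobi?"),
    Msg("ai", "", tool_calls=[{"name": "get_weather", "args": {"city": "Nairobi"}}]),
    Msg("tool", "Sunny, 26°C"),
    Msg("ai", "It's sunny and 26°C in Nairobi."),
]

tool_call_rounds = sum(1 for m in messages if m.type == "ai" and m.tool_calls)
print(f"tool call rounds: {tool_call_rounds}")  # at least 1 if routing worked
```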
Next Steps
- Add multiple tools and let the model choose between them with bind_tools()
- Replace the hardcoded lookup with real APIs or internal services
- Add memory or persistence so conversations survive across runs
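To get a feel for what choosing between multiple tools amounts to at execution time, here is a stdlib-only dispatch sketch: a name-to-function table, which is roughly what running a model-issued tool call by name boils down to. The second tool, get_time, and its data are hypothetical, added only for illustration:

```python
def get_weather(city: str) -> str:
    return {"nairobi": "Sunny, 26°C"}.get(city.lower(), f"No weather data for {city}")

def get_time(city: str) -> str:
    # Hypothetical second tool, for illustration only.
    return {"nairobi": "14:00 EAT"}.get(city.lower(), f"No time data for {city}")

# Dispatch table: the model picks a name, execution looks it up here.
tools_by_name = {"get_weather": get_weather, "get_time": get_time}

# A model-issued tool call is essentially a name plus arguments.
tool_call = {"name": "get_time", "args": {"city": "Nairobi"}}
result = tools_by_name[tool_call["name"]](**tool_call["args"])
print(result)  # -> "14:00 EAT"
```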
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.