LangGraph Tutorial (Python): adding tool use for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to add tool use to a LangGraph agent in Python, using a real tool-calling loop instead of a one-shot LLM call. You need this when your agent must fetch external data, call internal services, or make decisions based on live system state.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-openai
  • langchain-core
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with LangGraph state graphs and message passing

Install the packages:

pip install langgraph langchain-openai langchain-core

Step-by-Step

  1. Start with a graph state that stores chat messages. For tool use, the model needs access to the full conversation so it can decide whether to call a tool or answer directly.
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
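The add_messages annotation is a reducer: each node returns only its new messages, and LangGraph appends them to the existing list instead of overwriting state. A simplified sketch of that behavior (the real reducer also matches on message IDs, so a message with an existing ID replaces the old one rather than duplicating it):

```python
def add_messages_sketch(existing: list, updates: list) -> list:
    # Simplified: append node output to the running conversation.
    # The real add_messages also de-duplicates/replaces by message ID.
    return list(existing) + list(updates)

state = {"messages": ["human: hi"]}
node_output = {"messages": ["ai: hello"]}
state["messages"] = add_messages_sketch(state["messages"], node_output["messages"])
```

This is why agent nodes below return {"messages": [response]} with a single-element list: the reducer merges it into the full history for them.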
  2. Define one or more tools as normal Python functions. Keep them deterministic and narrow in scope; overly broad tools are one of the most common failure points in production agents.
from langchain_core.tools import tool

@tool
def get_policy_status(policy_id: str) -> str:
    """Return the status of an insurance policy by policy ID."""
    mock_db = {
        "POL123": "active",
        "POL456": "lapsed",
        "POL789": "pending underwriting",
    }
    return mock_db.get(policy_id, "policy not found")
  3. Bind the tool to a chat model and create the agent node. The key detail is bind_tools(): without it, the model can mention tools but cannot emit tool calls in the format LangGraph expects.
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_policy_status]
llm_with_tools = llm.bind_tools(tools)

def agent_node(state: AgentState):
    system = SystemMessage(
        content="You are a policy assistant. Use tools when you need exact policy status."
    )
    response = llm_with_tools.invoke([system] + state["messages"])
    return {"messages": [response]}
  4. Add a tool execution node that runs whatever tool the model requested. LangGraph gives you the helper ToolNode, which handles parsing tool calls and returning tool results as messages.
from langgraph.prebuilt import ToolNode, tools_condition

tool_node = ToolNode(tools)

graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("tools", tool_node)

graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", tools_condition)
graph.add_edge("tools", "agent")
app = graph.compile()
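The routing logic in tools_condition is simple: if the agent's last message contains tool calls, go to the tools node; otherwise end the run. A stdlib sketch of that decision, assuming only a message object with a tool_calls attribute:

```python
from types import SimpleNamespace

def tools_condition_sketch(state: dict) -> str:
    # Route to "tools" when the model emitted tool calls, else finish.
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "tools"
    return "__end__"

wants_tool = SimpleNamespace(
    tool_calls=[{"name": "get_policy_status", "args": {"policy_id": "POL123"}}]
)
plain_answer = SimpleNamespace(tool_calls=[])
```

Because the tools node edges back to "agent", this check runs on every loop iteration until the model answers without requesting a tool.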
  5. Run the graph with a user question that forces tool use. If everything is wired correctly, the model will call get_policy_status, receive the result, then answer using that result.
from langchain_core.messages import HumanMessage

result = app.invoke(
    {
        "messages": [
            HumanMessage(content="What is the status of policy POL456?")
        ]
    }
)

for msg in result["messages"]:
    print(f"{msg.type}: {msg.content}")
  6. Add one more guardrail if you are building for real systems: validate inputs before they reach your backend. A good pattern is to keep tools thin and put authorization, schema checks, and retries outside the LLM boundary.
@tool
def get_claim_balance(claim_id: str) -> str:
    """Return claim balance for a claim ID."""
    if not claim_id.startswith("CLM"):
        return "invalid claim id"
    mock_claims = {"CLM001": "$1200", "CLM002": "$0"}
    return mock_claims.get(claim_id, "claim not found")
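Retries belong in that same outer layer. A minimal stdlib retry wrapper you could put around a real service client before exposing it as a tool; the attempt count and delay here are illustrative, not from the original code:

```python
import time

def with_retries(fn, attempts=3, delay=0.1):
    # Retry transient failures outside the tool, so the model never
    # sees a half-failed call; re-raise after the final attempt.
    def wrapper(*args, **kwargs):
        last_err = None
        for _ in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception as err:
                last_err = err
                time.sleep(delay)
        raise last_err
    return wrapper

calls = {"n": 0}

def flaky_claim_lookup(claim_id: str) -> str:
    # Hypothetical backend client that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "$1200"

safe_lookup = with_retries(flaky_claim_lookup)
```

The tool itself stays a thin call to safe_lookup, so the retry policy can change without touching anything the LLM sees.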

Testing It

Run the script and ask for a value that only exists in the tool’s mock data, like POL456. You should see at least one assistant message with a tool call, followed by a tool result message, then a final assistant answer.

If you get a plain natural-language guess instead of a tool call, check that you used llm.bind_tools(tools) and that your model supports function calling. If execution fails inside ToolNode, inspect the tool signature and docstring; LangChain uses both to build the schema passed to the model.
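To see why the signature and docstring matter, here is roughly the kind of schema that gets derived from them. This stdlib sketch is illustrative only; the real implementation builds full JSON Schema via Pydantic:

```python
import inspect

def sketch_tool_schema(fn):
    # Illustrative only: map parameter annotations and the docstring
    # into the rough shape of a function-calling schema.
    sig = inspect.signature(fn)
    params = {
        name: {"type": p.annotation.__name__}
        for name, p in sig.parameters.items()
        if p.annotation is not inspect.Parameter.empty
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": params,
    }

def get_policy_status(policy_id: str) -> str:
    """Return the status of an insurance policy by policy ID."""
    return ""

schema = sketch_tool_schema(get_policy_status)
```

A missing annotation or empty docstring degrades the schema, which is usually why the model stops choosing the tool.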

For production-style testing, verify three cases:

  • A valid ID returns the expected result
  • An invalid ID returns a controlled failure message
  • A question that does not need tools gets answered directly
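Since @tool-decorated functions are plain Python underneath, the first two cases can be checked without an LLM run at all. A sketch using the same mock data as above, written as a bare function so it runs standalone (the third case needs a live model, so it is not shown):

```python
def get_policy_status(policy_id: str) -> str:
    """Return the status of an insurance policy by policy ID."""
    mock_db = {
        "POL123": "active",
        "POL456": "lapsed",
        "POL789": "pending underwriting",
    }
    return mock_db.get(policy_id, "policy not found")

# Case 1: a valid ID returns the expected result
assert get_policy_status("POL456") == "lapsed"

# Case 2: an invalid ID returns a controlled failure message
assert get_policy_status("POL000") == "policy not found"
```

Keeping tool bodies testable like this is a side benefit of keeping them thin and deterministic.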

Next Steps

  • Add multiple tools and let LangGraph route between them with tools_condition
  • Replace mock functions with real service clients behind retry and timeout wrappers
  • Add structured output after the final agent step so downstream systems can consume typed results
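For the structured-output step, one lightweight pattern is to define a typed result and validate the agent's final answer into it before handing off downstream. A stdlib sketch with a hypothetical PolicyResult type (not part of the original code); a production version would use the model's structured-output support rather than string matching:

```python
from dataclasses import dataclass

@dataclass
class PolicyResult:
    # Hypothetical typed payload for downstream systems.
    policy_id: str
    status: str

def to_policy_result(policy_id: str, answer_text: str) -> PolicyResult:
    # Naive extraction for illustration: scan the final answer for a
    # known status string, falling back to "unknown".
    known = ("active", "lapsed", "pending underwriting", "policy not found")
    status = next((s for s in known if s in answer_text), "unknown")
    return PolicyResult(policy_id=policy_id, status=status)

result = to_policy_result("POL456", "Policy POL456 is currently lapsed.")
```

The point is the boundary: downstream consumers get a typed record, never raw model text.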

By Cyprian Aarons, AI Consultant at Topiax.
