LangGraph Tutorial (Python): building custom tools for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to build a LangGraph agent in Python with custom tools, structured state, and tool routing that you can actually ship. You need this when the built-in examples stop being enough and your agent has to call internal systems, validate inputs, and keep control over multi-step execution.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-core
  • langchain-openai
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with StateGraph, ToolNode, and tool calling
  • A terminal and a virtual environment

Install the packages:

pip install langgraph langchain-core langchain-openai
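Before running the examples, it helps to confirm the API key is actually visible from Python. This is just a quick sanity check, assuming you exported OPENAI_API_KEY in your shell:

import os

# Fail fast if the key is missing rather than hitting an auth error mid-graph.
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY before running the examples"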

Step-by-Step

  1. Define a typed graph state and a custom tool.

For advanced agents, don’t pass loose dictionaries around. Use a typed state so your graph stays predictable, and build tools as normal Python functions with clear input contracts.

from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage
from langchain_core.tools import tool
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]


@tool
def lookup_policy_status(policy_id: str) -> str:
    """Look up a policy status by policy ID."""
    mock_db = {
        "POL123": "active",
        "POL456": "pending_review",
        "POL789": "cancelled",
    }
    return mock_db.get(policy_id, "policy_not_found")
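Because @tool wraps the function in a runnable, you can sanity-check it directly before wiring it into the graph. A quick check against the mock data:

print(lookup_policy_status.invoke({"policy_id": "POL123"}))  # "active"
print(lookup_policy_status.invoke({"policy_id": "POL999"}))  # "policy_not_found"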
  2. Create the model node with tool binding.

The model should know about the tools it can call. This is where LangGraph starts being useful for production work: the LLM decides when to call a tool, but your graph decides what happens next.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [lookup_policy_status]
llm_with_tools = llm.bind_tools(tools)


def agent_node(state: AgentState):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}
  3. Add a tool execution node and conditional routing.

This is the control point. If the model requests a tool call, route to the tool node; otherwise end the graph. That pattern keeps tool use explicit instead of burying it inside prompt logic.

from langgraph.prebuilt import ToolNode
from langgraph.graph import END, StateGraph
from langchain_core.messages import HumanMessage

tool_node = ToolNode(tools)


def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return END


builder = StateGraph(AgentState)
builder.add_node("agent", agent_node)
builder.add_node("tools", tool_node)

builder.set_entry_point("agent")
builder.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
builder.add_edge("tools", "agent")

graph = builder.compile()
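Before running anything, it can help to confirm the topology matches your intent. Recent LangGraph versions can render the compiled graph as a Mermaid diagram; treat this as an optional check:

# Prints a Mermaid definition of the agent -> tools -> agent loop.
print(graph.get_graph().draw_mermaid())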
  4. Run the graph with a real user message.

Use a message that forces the model to call your custom tool. The graph will loop through the agent node, execute the tool, then return to the agent so it can produce a final answer using the tool output.

result = graph.invoke(
    {
        "messages": [
            HumanMessage(content="Check policy POL123 and tell me if it is active.")
        ]
    }
)

for message in result["messages"]:
    print(f"{message.__class__.__name__}: {message.content}")
  5. Add a second tool for more realistic workflows.

Once one custom tool works, add another one that performs validation or transforms data before calling downstream systems. In insurance or banking work, this is where you start enforcing business rules instead of trusting raw user input.

@tool
def normalize_policy_id(policy_id: str) -> str:
    """Normalize policy IDs by stripping spaces and uppercasing."""
    return policy_id.strip().upper()


tools = [lookup_policy_status, normalize_policy_id]
llm_with_tools = llm.bind_tools(tools)
tool_node = ToolNode(tools)
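As with the first tool, a direct call confirms the behavior before the graph gets involved:

print(normalize_policy_id.invoke({"policy_id": " pol456 "}))  # "POL456"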
  6. Recompile after changing tools and test again.

LangGraph compiles the graph from your nodes and edges, so rebuild the builder and recompile whenever you change tools or routing logic. That keeps agent behavior fixed at compile time instead of mutating under load.

builder = StateGraph(AgentState)
builder.add_node("agent", agent_node)
builder.add_node("tools", tool_node)

builder.set_entry_point("agent")
builder.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
builder.add_edge("tools", "agent")

graph = builder.compile()

result = graph.invoke(
    {
        "messages": [
            HumanMessage(content="Normalize policy id ' pol456 ' then check its status.")
        ]
    }
)

print(result["messages"][-1].content)
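If you want to watch each node fire instead of only inspecting the final state, the compiled graph also supports streaming. A sketch using stream_mode="values", which yields the full state after every step:

for state in graph.stream(
    {"messages": [HumanMessage(content="Check policy POL789.")]},
    stream_mode="values",
):
    # Print which message type was appended at each step of the loop.
    print(state["messages"][-1].__class__.__name__)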

Testing It

Start by checking whether the assistant actually emits a tool call when you ask about a known policy ID like POL123. If it does not, your prompt or model choice may not be encouraging function calling strongly enough.

Next, test an unknown ID such as POL999 and confirm that your custom tool returns policy_not_found. Then verify that the final assistant response reflects that result instead of hallucinating a status.

If you added normalization, test messy input like " pol456 " and confirm it becomes POL456 before lookup. That’s the kind of small control layer that matters in production agents.
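Those checks can be captured in a small script. This is a sketch against the mock tool and compiled graph from earlier, not a full test suite:

# Direct tool-level checks against the mock data.
assert lookup_policy_status.invoke({"policy_id": "POL123"}) == "active"
assert lookup_policy_status.invoke({"policy_id": "POL999"}) == "policy_not_found"
assert normalize_policy_id.invoke({"policy_id": " pol456 "}) == "POL456"

# End-to-end check: the agent should request at least one tool call for a known ID.
result = graph.invoke(
    {"messages": [HumanMessage(content="Is policy POL123 active?")]}
)
assert any(getattr(m, "tool_calls", None) for m in result["messages"]), (
    "expected at least one tool call for a known policy ID"
)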

Next Steps

  • Add structured output with Pydantic models so final answers are validated before returning them.
  • Replace mock functions with real internal APIs behind retries, timeouts, and audit logging.
  • Learn checkpointing in LangGraph so long-running workflows can resume after failures instead of restarting from scratch (see the sketch after this list).
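For checkpointing specifically, a minimal sketch uses the in-memory saver; a production system would swap in a persistent backend, and the thread_id shown here is just an illustrative value:

from langgraph.checkpoint.memory import MemorySaver

# In-memory checkpointer for local experimentation only.
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)

# The thread_id identifies a conversation so it can be resumed later.
config = {"configurable": {"thread_id": "policy-session-1"}}
result = graph.invoke(
    {"messages": [HumanMessage(content="Check policy POL123.")]},
    config=config,
)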

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

