LangGraph Tutorial (Python): building custom tools for beginners

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to build a LangGraph agent in Python that can call a custom tool, pass arguments into it, and return a structured result. You need this when the model must do something deterministic — like look up policy data, calculate a premium, or fetch internal records — instead of guessing.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-core
  • langchain-openai
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with Python functions and Pydantic-style tool inputs
  • A terminal and a virtual environment

Install the packages:

pip install langgraph langchain-core langchain-openai

Step-by-Step

  1. First, define a custom tool with a strict input schema. In production, this is where you wrap internal logic like database queries or business rules.
from langchain_core.tools import tool

@tool
def calculate_quote(age: int, smoker: bool) -> str:
    """Return a simple insurance quote based on age and smoking status."""
    base = 100
    age_adjustment = max(age - 30, 0) * 2
    smoker_adjustment = 50 if smoker else 0
    total = base + age_adjustment + smoker_adjustment
    return f"Estimated monthly premium: ${total}"
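Before wiring the tool into a graph, it helps to sanity-check the pricing arithmetic on its own. This is the same formula as above, reproduced in plain Python with no LangChain dependency, so you can verify the numbers by hand (for age 42 and a smoker: 100 base + 24 age adjustment + 50 smoker adjustment = 174):

```python
def premium(age: int, smoker: bool) -> int:
    # Same arithmetic as calculate_quote, without the tool wrapper
    base = 100
    age_adjustment = max(age - 30, 0) * 2
    smoker_adjustment = 50 if smoker else 0
    return base + age_adjustment + smoker_adjustment

print(premium(42, True))   # 100 + 24 + 50 = 174
print(premium(25, False))  # under 30 and non-smoker: base only, 100
```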
  2. Next, create an LLM that knows how to call tools. The important part is binding the tool to the model so LangGraph can route tool calls correctly.
import os
from langchain_openai import ChatOpenAI

if not os.getenv("OPENAI_API_KEY"):
    raise ValueError("Set OPENAI_API_KEY in your environment")

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
llm_with_tools = llm.bind_tools([calculate_quote])
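Once the tool is bound, an assistant message that requests the tool carries a `tool_calls` list rather than plain text. As a rough sketch of that payload shape (the `id` value here is a made-up placeholder, and the real dict may carry extra keys):

```python
# Illustrative shape of one entry in an AIMessage's tool_calls list
# after the model decides to use the bound tool.
tool_call = {
    "name": "calculate_quote",
    "args": {"age": 42, "smoker": True},
    "id": "call_abc123",  # placeholder; real ids are model-generated
}
print(f'{tool_call["name"]} called with {tool_call["args"]}')
```

LangGraph's prebuilt routing inspects exactly this structure to decide whether to hand control to the tool node.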
  3. Now define the graph state and the agent node. This node takes messages in, sends them to the model, and returns the next assistant message.
from typing import Annotated, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]

def assistant_node(state: State):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}
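The `add_messages` annotation is what lets `assistant_node` return only the new message instead of the whole history: LangGraph merges node output into state via the reducer. A simplified sketch of the append behavior (the real reducer also de-duplicates by message id and supports in-place updates):

```python
# Simplified stand-in for the add_messages reducer: appends new messages
# onto the existing list rather than replacing it.
def add_messages_sketch(existing: list, new: list) -> list:
    return existing + new

history = add_messages_sketch([], ["user: hi"])
history = add_messages_sketch(history, ["assistant: hello"])
print(len(history))  # history accumulates across node returns: 2
```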
  4. Add a tool execution node and wire the graph together. LangGraph will loop between the assistant and tools until the model stops asking for tool calls.
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition

graph_builder = StateGraph(State)

graph_builder.add_node("assistant", assistant_node)
graph_builder.add_node("tools", ToolNode([calculate_quote]))

graph_builder.add_edge(START, "assistant")
graph_builder.add_conditional_edges("assistant", tools_condition)
graph_builder.add_edge("tools", "assistant")

app = graph_builder.compile()
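To build intuition for the assistant/tools loop that `tools_condition` produces, here is a tiny hand-rolled simulation with no LLM involved. The message strings and function names are purely illustrative; the real graph routes structured messages, not prefixed strings:

```python
# Toy simulation of the loop: assistant asks for a tool once, the tool
# result is appended, then the assistant produces a final answer.
def fake_assistant(messages: list[str]) -> str:
    if not any(m.startswith("TOOL:") for m in messages):
        return "CALL:calculate_quote"    # first pass: request the tool
    return "FINAL: quote is $174"        # second pass: answer from tool output

def run_loop(messages: list[str]) -> list[str]:
    while True:
        reply = fake_assistant(messages)
        messages.append(reply)
        if reply.startswith("CALL:"):
            # stand-in for the ToolNode executing calculate_quote
            messages.append("TOOL: Estimated monthly premium: $174")
        else:
            return messages

print(run_loop(["USER: quote please"])[-1])
```

The compiled LangGraph app follows the same cycle: assistant → tools → assistant, exiting when the model stops emitting tool calls.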
  5. Finally, run the graph with a user message. Ask for something that should trigger the tool so you can see the full loop in action.
from langchain_core.messages import HumanMessage

result = app.invoke(
    {
        "messages": [
            HumanMessage(content="I'm 42 years old and I smoke. Give me an insurance quote.")
        ]
    }
)

for message in result["messages"]:
    print(f"{message.__class__.__name__}: {message.content}")

Testing It

Run the script from your terminal and confirm you get at least one assistant message plus one tool call behind the scenes. If everything is wired correctly, the final assistant response should include the premium returned by calculate_quote.

Try changing the input to remove smoking status or adjust age and verify the output changes accordingly. That tells you both the tool schema and graph routing are working.

If you want to inspect the actual tool call payloads, print each message object and look for tool_calls on the assistant message before the tool node runs.
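The inspection pattern looks like this. To keep the example self-contained, `SimpleNamespace` stubs stand in for real message objects here; a genuine `AIMessage` exposes its `tool_calls` attribute the same way:

```python
from types import SimpleNamespace

# Stubs standing in for the messages list returned by app.invoke(...)
messages = [
    SimpleNamespace(content="I'm 42 and I smoke.", tool_calls=[]),
    SimpleNamespace(content="", tool_calls=[
        {"name": "calculate_quote", "args": {"age": 42, "smoker": True}},
    ]),
]

for message in messages:
    # getattr guards against message types that have no tool_calls attribute
    for call in getattr(message, "tool_calls", []) or []:
        print(f'{call["name"]}({call["args"]})')
```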

Next Steps

  • Add more tools with different schemas, such as lookup_policy or fetch_claim_status
  • Replace the toy pricing logic with a real service call or database query
  • Learn how to add memory and checkpoints so your agent can continue across multiple turns

By Cyprian Aarons, AI Consultant at Topiax.