LangGraph Tutorial (Python): building custom tools for intermediate developers
This tutorial shows how to build a LangGraph agent in Python that can call your own custom tools, not just canned demo tools. You need this when the model has to interact with internal APIs, database lookups, or business-specific logic that you control and can test.
What You'll Need
- Python 3.10+
- `langgraph`
- `langchain-openai`
- `langchain-core`
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with LangGraph state, nodes, and edges
- Optional: `python-dotenv` if you want to load environment variables from a `.env` file
Install the packages:
```shell
pip install langgraph langchain-openai langchain-core python-dotenv
```
Step-by-Step
- Start by defining the graph state and the custom tools. The key idea is that each tool should be a normal Python function with a clear input contract, wrapped with `@tool` so LangChain can expose it to the model.
```python
from typing import Annotated, Literal
from typing_extensions import TypedDict

from langchain_core.tools import tool
from langgraph.graph.message import add_messages


@tool
def lookup_policy_status(policy_id: str) -> str:
    """Look up a policy status by policy ID."""
    fake_db = {
        "POL123": "Active",
        "POL456": "Lapsed",
        "POL789": "Pending underwriting",
    }
    return fake_db.get(policy_id, "Policy not found")


@tool
def calculate_quote(age: int, coverage_amount: int) -> str:
    """Return a simple insurance quote estimate."""
    base = 25
    risk = 1.0 if age < 40 else 1.3
    premium = round(base + (coverage_amount / 10000) * risk, 2)
    return f"Estimated monthly premium: ${premium}"
```
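Before wiring the quote tool into a graph, it helps to sanity-check the arithmetic on its own. The `estimate_premium` function below is a hypothetical plain-function copy of the formula inside `calculate_quote`, kept free of the `@tool` wrapper so it can be called directly:

```python
def estimate_premium(age: int, coverage_amount: int) -> float:
    """Same formula as calculate_quote, as a plain testable function."""
    base = 25
    risk = 1.0 if age < 40 else 1.3  # flat risk multiplier above age 40
    return round(base + (coverage_amount / 10000) * risk, 2)


print(estimate_premium(35, 50_000))  # 25 + 5 * 1.0 -> 30.0
print(estimate_premium(55, 50_000))  # 25 + 5 * 1.3 -> 31.5
```

Working the numbers by hand like this makes it obvious later whether the model is relaying the tool's output or inventing its own.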
- Define the agent state and bind the tools to a chat model. This gives the model access to your functions while keeping the graph state explicit and inspectable.
```python
from langchain_openai import ChatOpenAI


class AgentState(TypedDict):
    messages: Annotated[list, add_messages]


tools = [lookup_policy_status, calculate_quote]
# ChatOpenAI reads OPENAI_API_KEY from the environment.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools(tools)
```
- Create the node that calls the model and decides whether to use a tool. In LangGraph, this is usually just a function that takes state and returns the next assistant message.
```python
from langchain_core.messages import SystemMessage


def assistant_node(state: AgentState):
    system = SystemMessage(
        content="You are an insurance assistant. Use tools when needed."
    )
    response = llm.invoke([system] + state["messages"])
    return {"messages": [response]}
```
- Add a tool execution node and route between the model's output and tool calls. The conditional edge checks whether the model requested any tools; if so, LangGraph routes to the tool node, otherwise the run ends.
```python
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools)


def should_continue(state: AgentState) -> Literal["tools", END]:
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return END


graph_builder = StateGraph(AgentState)
graph_builder.add_node("assistant", assistant_node)
graph_builder.add_node("tools", tool_node)
graph_builder.add_edge(START, "assistant")
graph_builder.add_conditional_edges("assistant", should_continue)
graph_builder.add_edge("tools", "assistant")
app = graph_builder.compile()
```
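Conceptually, the compiled graph executes the loop sketched below. This is a pure-Python approximation for building intuition, not LangGraph's actual implementation: `run_agent`, `fake_model`, and `fake_tools` are all hypothetical stand-ins for the assistant node, the model, and the `ToolNode`.

```python
def run_agent(messages, call_model, run_tools):
    """Approximation of the assistant -> tools -> assistant cycle."""
    while True:
        response = call_model(messages)            # "assistant" node
        messages = messages + [response]
        tool_calls = response.get("tool_calls")
        if not tool_calls:                         # should_continue -> END
            return messages
        messages = messages + run_tools(tool_calls)  # "tools" node


# Stub model: requests a tool once, then answers from the tool result.
def fake_model(messages):
    if any(m.get("role") == "tool" for m in messages):
        return {"role": "assistant", "content": "POL123 is Active."}
    return {
        "role": "assistant",
        "content": "",
        "tool_calls": [{"name": "lookup_policy_status",
                        "args": {"policy_id": "POL123"}}],
    }


def fake_tools(tool_calls):
    return [{"role": "tool", "content": "Active"} for _ in tool_calls]


history = run_agent(
    [{"role": "user", "content": "Check POL123"}], fake_model, fake_tools
)
print(history[-1]["content"])  # POL123 is Active.
```

The real graph adds state merging via `add_messages` and proper message objects, but the control flow is the same: call the model, execute any requested tools, feed the results back, and stop when the model answers without tool calls.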
- Invoke the graph with a real user message. This example asks for both policy status and a quote so you can see tool calling in action.
```python
from langchain_core.messages import HumanMessage

result = app.invoke(
    {
        "messages": [
            HumanMessage(
                content="Check policy POL123 and estimate a quote for age 35 with coverage 50000."
            )
        ]
    }
)
for message in result["messages"]:
    print(f"{message.__class__.__name__}: {getattr(message, 'content', '')}")
```
- If you want better production control, keep tools small and deterministic. Split business logic from LLM orchestration so each tool can be tested independently before LangGraph ever sees it.
```python
def get_policy_payload(policy_id: str) -> dict:
    policies = {
        "POL123": {"status": "Active", "holder": "A. Smith"},
        "POL456": {"status": "Lapsed", "holder": "J. Doe"},
    }
    return policies.get(policy_id, {"status": "Unknown", "holder": None})


@tool
def lookup_policy_details(policy_id: str) -> str:
    """Return structured policy details as JSON-like text."""
    payload = get_policy_payload(policy_id)
    return f"policy_id={policy_id}, status={payload['status']}, holder={payload['holder']}"
```
Testing It
Run the script and confirm you see an assistant response plus one or more tool calls in the message trace. If the model is wired correctly, it should call lookup_policy_status for POL123 instead of guessing.
Also test failure paths by asking for an unknown policy ID like POL000. The tool should return "Policy not found", and the assistant should respond based on that output rather than hallucinating details.
If you want more confidence, unit test each tool function directly without LangGraph. That keeps failures local to either your business logic or your orchestration layer.
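As a sketch, the plain helpers can be checked with bare asserts (or a pytest file). The function is restated here so the example is self-contained; in a real project you would import it from your business-logic module instead:

```python
def get_policy_payload(policy_id: str) -> dict:
    """Same helper as above, restated for a self-contained test."""
    policies = {
        "POL123": {"status": "Active", "holder": "A. Smith"},
        "POL456": {"status": "Lapsed", "holder": "J. Doe"},
    }
    return policies.get(policy_id, {"status": "Unknown", "holder": None})


# Known policies resolve to their records.
assert get_policy_payload("POL123")["status"] == "Active"
assert get_policy_payload("POL456")["holder"] == "J. Doe"
# Unknown IDs take the failure path instead of raising.
assert get_policy_payload("POL000") == {"status": "Unknown", "holder": None}
print("tool logic tests passed")
```

If these pass but the agent still misbehaves, you know the bug lives in the orchestration layer, not the tool.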
Next Steps
- Add structured outputs with Pydantic models so tool results are easier to validate.
- Replace the fake dictionaries with real service clients or database queries behind your tools.
- Learn how to add memory and checkpointing so multi-turn workflows survive retries and restarts.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.