LangChain Tutorial (Python): Building Custom Tools for Advanced Developers
This tutorial shows how to build a custom LangChain tool in Python, wire it into an agent, and make it reliable enough for real workflows. You need this when the built-in tools stop being enough and you want your agent to call internal logic, validate inputs, and return structured outputs instead of brittle text.
What You'll Need
- Python 3.10+
- langchain
- langchain-openai
- pydantic
- An OpenAI API key set as OPENAI_API_KEY
- Basic familiarity with LangChain agents and chat models
- A terminal and a virtual environment
Install the packages:
```shell
pip install langchain langchain-openai pydantic
```
Step-by-Step
- Start with a real function that does one job well. In production, your tool should be deterministic, typed, and easy to test before you wrap it for LangChain.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def get_market_time(timezone: str) -> str:
    """Return the current time in a given timezone."""
    now = datetime.now(ZoneInfo(timezone))
    return now.strftime("%Y-%m-%d %H:%M:%S %Z")

print(get_market_time("UTC"))
```
- Wrap that function in a LangChain tool with a typed schema. This is the part advanced developers care about: explicit input contracts reduce agent errors and make tool calls inspectable.

```python
from pydantic import BaseModel, Field
from langchain_core.tools import StructuredTool

class MarketTimeInput(BaseModel):
    timezone: str = Field(description="IANA timezone name like UTC or America/New_York")

market_time_tool = StructuredTool.from_function(
    func=get_market_time,
    name="get_market_time",
    description="Get the current time for a specific timezone.",
    args_schema=MarketTimeInput,
)

result = market_time_tool.invoke({"timezone": "UTC"})
print(result)
```
- Add a second tool that performs a more realistic business task. Here we normalize simple policy text into a structured risk label, the kind of pattern you use for internal ops, claims triage, or compliance workflows.

```python
from typing import Literal

class RiskInput(BaseModel):
    text: str = Field(description="Short policy or claim note")

def classify_risk(text: str) -> Literal["low", "medium", "high"]:
    lowered = text.lower()
    if any(word in lowered for word in ["fraud", "stolen", "lawsuit"]):
        return "high"
    if any(word in lowered for word in ["delay", "missing", "review"]):
        return "medium"
    return "low"

risk_tool = StructuredTool.from_function(
    func=classify_risk,
    name="classify_risk",
    description="Classify a short note as low, medium, or high risk.",
    args_schema=RiskInput,
)

print(risk_tool.invoke({"text": "Possible fraud detected in invoice"}))
```
- Put both tools behind an agent using a chat model. The agent decides when to call each tool, which is exactly what you want when the LLM needs controlled access to internal capabilities. Note that `initialize_agent` is deprecated in newer LangChain releases in favor of LangGraph, but it remains the fastest way to demonstrate tool selection.

```python
import os

from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType

# Fail fast if the key is missing; ChatOpenAI reads it from the environment.
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY before running"

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [market_time_tool, risk_tool]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

response = agent.invoke(
    {"input": "What time is it in UTC and classify this note: possible fraud detected in invoice"}
)
print(response["output"])
```
- Harden the tool layer before shipping it. Advanced teams should treat tools like API endpoints: validate inputs, keep side effects explicit, and separate pure functions from orchestration logic.

```python
# Explicit allowlist: the agent can only request approved timezones.
ALLOWED_TIMEZONES = {"UTC", "America/New_York", "Europe/London"}

def safe_get_market_time(timezone: str) -> str:
    if timezone not in ALLOWED_TIMEZONES:
        raise ValueError(f"Unsupported timezone: {timezone}")
    return get_market_time(timezone)

safe_market_time_tool = StructuredTool.from_function(
    func=safe_get_market_time,
    name="safe_get_market_time",
    description="Get current time for an approved timezone only.",
    args_schema=MarketTimeInput,
)

print(safe_market_time_tool.invoke({"timezone": "Europe/London"}))
```
Testing It
Run each code block independently first so you can isolate failures. If the direct tool calls work but the agent fails, the issue is usually model configuration or missing OPENAI_API_KEY, not the tool itself.
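As a quick preflight check before the agent test, you can verify the key is actually set. A minimal stdlib-only sketch (the helper name `has_openai_key` is my own, not a LangChain API):

```python
import os

def has_openai_key() -> bool:
    """Return True when OPENAI_API_KEY is set to a non-empty value."""
    return bool(os.environ.get("OPENAI_API_KEY"))
```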
For the agent test, ask something that clearly requires one or both tools, like “What time is it in UTC?” or “Classify this note: lawsuit filed against vendor.” With verbose=True, you should see the tool selection and arguments in the console.
If you want a stricter check, write unit tests against get_market_time and classify_risk before testing LangChain integration. That gives you deterministic coverage for business logic and leaves only orchestration behavior to validate manually.
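For example, the business logic from the earlier steps can be covered with plain assertions before any LangChain code runs (the two functions are repeated here so the snippet stands alone):

```python
from datetime import datetime
from typing import Literal
from zoneinfo import ZoneInfo

def get_market_time(timezone: str) -> str:
    """Return the current time in a given timezone."""
    return datetime.now(ZoneInfo(timezone)).strftime("%Y-%m-%d %H:%M:%S %Z")

def classify_risk(text: str) -> Literal["low", "medium", "high"]:
    lowered = text.lower()
    if any(word in lowered for word in ["fraud", "stolen", "lawsuit"]):
        return "high"
    if any(word in lowered for word in ["delay", "missing", "review"]):
        return "medium"
    return "low"

def test_classify_risk() -> None:
    assert classify_risk("Possible FRAUD detected") == "high"
    assert classify_risk("Shipment delay reported") == "medium"
    assert classify_risk("Routine renewal") == "low"

def test_get_market_time() -> None:
    # The UTC zone formats with the "UTC" abbreviation.
    assert get_market_time("UTC").endswith("UTC")

test_classify_risk()
test_get_market_time()
print("all tests passed")
```

Running this file directly (or pointing pytest at the `test_` functions) gives you that deterministic coverage.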
Next Steps
- Build tools from Pydantic schemas for multi-field inputs like claim IDs, dates, and region codes.
- Add retry logic and exception mapping so tool failures return usable agent messages.
- Move from `initialize_agent` to LangGraph when you need explicit control over routing, state, and human approval steps.
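The retry and exception-mapping idea can be sketched framework-free. The wrapper below is an illustrative pattern (the name `run_tool_safely` and the `tool_error:` convention are my own, not a LangChain API); it returns an error string the agent can read instead of raising a traceback:

```python
import time
from typing import Callable

def run_tool_safely(
    func: Callable[..., str],
    *args,
    retries: int = 2,
    delay: float = 0.1,
    **kwargs,
) -> str:
    """Call a tool function, retrying transient errors and mapping
    failures to a message the agent can act on."""
    last_error: Exception | None = None
    for attempt in range(retries + 1):
        try:
            return func(*args, **kwargs)
        except ValueError as exc:
            # Validation errors are permanent: surface them immediately.
            return f"tool_error: {exc}"
        except Exception as exc:  # transient failures (network, timeout, etc.)
            last_error = exc
            time.sleep(delay)
    return f"tool_error: gave up after {retries + 1} attempts ({last_error})"
```

You could then pass a partially-applied wrapper as `func=` when building a `StructuredTool`, so a bad timezone yields `tool_error: Unsupported timezone: ...` that the agent can recover from.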
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit