LangChain Tutorial (Python): Adding Tool Use for Advanced Developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add tool use to a LangChain Python agent so it can call external functions instead of guessing. You need this when your model must fetch live data, hit internal APIs, or run deterministic business logic before answering.

What You'll Need

  • Python 3.10+
  • A virtual environment
  • langchain
  • langchain-openai
  • openai
  • An OpenAI API key in OPENAI_API_KEY
  • langchainhub (used to pull the agent prompt in Step 4)
  • Optional: python-dotenv if you want to load env vars from a .env file

Install the packages:

pip install langchain langchain-openai openai langchainhub python-dotenv

Step-by-Step

  1. Start with a clean setup and define one real tool.
    In production, tools should be boring and deterministic: take typed inputs, return plain strings, and fail loudly when something is wrong.
import os
from datetime import datetime
from dotenv import load_dotenv

load_dotenv()

def get_market_close_time(exchange: str) -> str:
    closes = {
        "NYSE": "16:00 ET",
        "NASDAQ": "16:00 ET",
        "LSE": "16:30 GMT",
    }
    return closes.get(exchange.upper(), "Unknown exchange")

print(get_market_close_time("NYSE"))
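If you prefer the "fail loudly" behavior over returning a sentinel string, a minimal sketch (plain Python, no LangChain required; the `_strict` name is illustrative) is to raise on unknown input so the calling layer surfaces the error instead of feeding "Unknown exchange" to the model:

```python
def get_market_close_time_strict(exchange: str) -> str:
    """Like get_market_close_time, but raises instead of returning a sentinel."""
    closes = {
        "NYSE": "16:00 ET",
        "NASDAQ": "16:00 ET",
        "LSE": "16:30 GMT",
    }
    key = exchange.upper()
    if key not in closes:
        # Fail loudly: a bad input is either a bug or an unsupported request,
        # not something to paper over with a default string.
        raise ValueError(f"Unsupported exchange: {exchange!r}")
    return closes[key]
```

Which style you choose matters later: an exception becomes a tool error the agent can react to, while a sentinel string flows back to the model as if it were a valid answer.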
  2. Wrap the function as a LangChain tool.
    The @tool decorator gives LangChain the metadata it needs to decide when to call your function and how to pass arguments.
from langchain_core.tools import tool

@tool
def get_market_close_time_tool(exchange: str) -> str:
    """Return the regular market close time for a stock exchange."""
    closes = {
        "NYSE": "16:00 ET",
        "NASDAQ": "16:00 ET",
        "LSE": "16:30 GMT",
    }
    return closes.get(exchange.upper(), "Unknown exchange")

print(get_market_close_time_tool.name)
print(get_market_close_time_tool.description)
  3. Build a chat model that supports tool calling and bind the tool to it.
    This is the key step: the model can now choose between answering directly or calling your function first.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_market_close_time_tool]
llm_with_tools = llm.bind_tools(tools)

response = llm_with_tools.invoke(
    "What time does NYSE close? Use the tool."
)
print(response)
  4. Add an agent loop so the model can call tools and then produce a final answer.
    Tool binding alone returns a tool call request; an agent executor handles the round trip between model, tool, and final response.
from langchain import hub
from langchain.agents import create_openai_tools_agent, AgentExecutor

prompt = hub.pull("hwchase17/openai-tools-agent")
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = agent_executor.invoke(
    {"input": "What time does NASDAQ close? Answer in one sentence."}
)
print(result["output"])
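To see what the executor is doing under the hood, here is a hand-rolled sketch of that round trip in plain Python. The stubbed `fake_model`, the message dicts, and the `TOOLS` registry are illustrative stand-ins, not LangChain APIs; a real loop would call the LLM where `fake_model` is called:

```python
def fake_model(messages):
    """Stand-in for an LLM: requests a tool call once, then answers."""
    last = messages[-1]
    if last["role"] == "tool":
        # The tool result is in context, so produce the final answer.
        return {"role": "assistant", "content": f"It closes at {last['content']}."}
    # First pass: ask for a tool call instead of answering directly.
    return {
        "role": "assistant",
        "tool_call": {"name": "get_close_time", "args": {"exchange": "NASDAQ"}},
    }

TOOLS = {
    "get_close_time": lambda exchange: {"NASDAQ": "16:00 ET"}.get(
        exchange.upper(), "Unknown exchange"
    )
}

def run_agent(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # final answer, loop ends
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("What time does NASDAQ close?"))
```

This is exactly the loop AgentExecutor runs for you: model proposes a tool call, the executor runs it, appends the result, and re-invokes the model until it returns a plain answer.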
  5. Make the tool safer for real workloads.
    Advanced developers should validate inputs and keep side effects out of tools unless you explicitly need them for workflow automation.
from pydantic import BaseModel, Field
from langchain_core.tools import StructuredTool

class ExchangeInput(BaseModel):
    exchange: str = Field(..., description="Stock exchange name like NYSE or NASDAQ")

def get_close_time(exchange: str) -> str:
    closes = {
        "NYSE": "16:00 ET",
        "NASDAQ": "16:00 ET",
        "LSE": "16:30 GMT",
    }
    value = closes.get(exchange.upper())
    if value is None:
        return f"Unsupported exchange: {exchange}"
    return value

safe_tool = StructuredTool.from_function(
    func=get_close_time,
    name="get_close_time",
    description="Get the regular market close time for an exchange.",
    args_schema=ExchangeInput,
)

Testing It

Run the script and ask questions that clearly require external knowledge or deterministic lookup, like “What time does LSE close?” or “What time does NASDAQ close?”. With verbose=True, you should see the agent decide to call the tool before producing the final response.

If it answers without calling the tool, your prompt may not be strong enough or your query may be too easy to answer from prior knowledge. If you want stricter behavior, phrase requests as “Use the tool” or move toward structured routing where only certain intents are allowed to reach free-form generation.
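One sketch of that stricter routing idea, in plain Python (the intent names and keyword lists are illustrative, not a LangChain feature): classify the query first, and only let unmatched intents fall through to free-form generation.

```python
ROUTES = {
    # Queries matching these keywords must go through the tool-calling agent.
    "close_time": ("close", "closing time", "market hours"),
}

def route(query: str) -> str:
    """Map a query to an intent; anything unmatched is allowed free-form generation."""
    q = query.lower()
    for intent, keywords in ROUTES.items():
        if any(kw in q for kw in keywords):
            return intent
    return "freeform"
```

In a real system the `"close_time"` branch would dispatch to the agent executor with the tool bound, and only `"freeform"` queries would reach the raw model, so the model never gets the chance to answer a lookup question from prior knowledge.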

You should also test failure paths by passing an unsupported exchange like TSX. In production, that’s where you decide whether to return a fallback message, raise an exception, or route to another tool.
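The failure path is easy to pin down with plain assertions before the agent ever sees the tool (stdlib only; this re-declares the Step 5 lookup inline so the snippet is self-contained):

```python
def get_close_time(exchange: str) -> str:
    """Step 5 lookup, repeated here so the test runs standalone."""
    closes = {"NYSE": "16:00 ET", "NASDAQ": "16:00 ET", "LSE": "16:30 GMT"}
    value = closes.get(exchange.upper())
    if value is None:
        return f"Unsupported exchange: {exchange}"
    return value

# Happy path: case-insensitive lookup succeeds.
assert get_close_time("nyse") == "16:00 ET"
# Failure path: an unsupported exchange returns an explicit fallback string.
assert get_close_time("TSX") == "Unsupported exchange: TSX"
```

Testing the function directly like this keeps the deterministic logic covered even when you cannot afford to run the full agent loop in CI.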

Next Steps

  • Add multiple tools and let the agent choose between them based on intent.
  • Move from simple string-returning tools to typed tools with Pydantic schemas.
  • Add observability with LangSmith so you can inspect tool calls, latency, and failure rates.


By Cyprian Aarons, AI Consultant at Topiax.
