LangChain Tutorial (Python): adding tool use for intermediate developers

By Cyprian Aarons. Updated 2026-04-21.

This tutorial shows you how to add tool use to a LangChain Python agent so it can call external functions instead of guessing. You need this when your model has to fetch live data, do deterministic calculations, or trigger business logic that should not be left to free-form text generation.

What You'll Need

  • Python 3.10+
  • A virtual environment
  • langchain
  • langchain-openai
  • openai API key
  • Basic familiarity with LangChain chat models and prompts
  • Internet access for installing packages and calling the model

Install the dependencies:

pip install langchain langchain-openai openai

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with a normal chat model and a real Python function you want the model to call. Keep the tool small and deterministic; if it can fail, make the failure explicit.
import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def calculate_premium(age: int, smoker: bool) -> str:
    base = 120.0
    age_factor = max(0, age - 30) * 3.5
    smoker_surcharge = 85.0 if smoker else 0.0
    total = base + age_factor + smoker_surcharge
    return f"Monthly premium: ${total:.2f}"
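The step above says that if a tool can fail, the failure should be explicit. As a minimal sketch of what that looks like, here is a variant of the same function that raises on bad input instead of silently returning a misleading number. The specific validation rules (age bounds, type check) are illustrative assumptions, not requirements from LangChain:

```python
def calculate_premium_checked(age: int, smoker: bool) -> str:
    # Fail loudly on malformed input rather than computing garbage.
    # These bounds are example guardrails, not business rules.
    if not isinstance(age, int) or isinstance(age, bool):
        raise TypeError(f"age must be an int, got {type(age).__name__}")
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    base = 120.0
    age_factor = max(0, age - 30) * 3.5
    smoker_surcharge = 85.0 if smoker else 0.0
    total = base + age_factor + smoker_surcharge
    return f"Monthly premium: ${total:.2f}"
```

An explicit exception surfaces in your logs and can be reported back to the model as a tool error, instead of propagating a wrong premium downstream.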
  2. Turn that function into a LangChain tool using the @tool decorator. This gives the model a structured interface instead of asking it to invent arguments from plain text.
from langchain_core.tools import tool

@tool
def calculate_premium(age: int, smoker: bool) -> str:
    """Calculate an insurance premium from age and smoking status."""
    base = 120.0
    age_factor = max(0, age - 30) * 3.5
    smoker_surcharge = 85.0 if smoker else 0.0
    total = base + age_factor + smoker_surcharge
    return f"Monthly premium: ${total:.2f}"

tools = [calculate_premium]
  3. Bind the tool to the model. Without this step, the model can mention tools in text but cannot actually emit structured tool calls.
from langchain_core.messages import HumanMessage

llm_with_tools = llm.bind_tools(tools)

messages = [
    HumanMessage(content="What would be the monthly premium for a 45-year-old smoker?")
]

response = llm_with_tools.invoke(messages)
print(response.tool_calls)
print(response.content)
  4. Execute any tool calls returned by the model, then send the result back to the model for a final answer. This is the core loop: the model decides, your code runs the tool, then the model formats the response.
from langchain_core.messages import ToolMessage

messages.append(response)

for tool_call in response.tool_calls:
    if tool_call["name"] == "calculate_premium":
        result = calculate_premium.invoke(tool_call["args"])
        messages.append(
            ToolMessage(
                content=result,
                tool_call_id=tool_call["id"]
            )
        )

final_response = llm_with_tools.invoke(messages)
print(final_response.content)
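The loop above trusts whatever payload the model emits. A cheap guardrail is to check the shape of each tool call before dispatching it. The expected keys ("name", "args", "id") match what LangChain's tool-calling models return as of recent versions, but treat the exact schema as an assumption to verify against the version you have installed; the helper itself is illustrative:

```python
def validate_tool_call(tool_call: dict, allowed_tools: set) -> None:
    """Raise if a tool-call payload is malformed or names an unknown tool."""
    for key in ("name", "args", "id"):
        if key not in tool_call:
            raise ValueError(f"tool call missing required key: {key!r}")
    if tool_call["name"] not in allowed_tools:
        raise ValueError(f"model requested unknown tool: {tool_call['name']!r}")
    if not isinstance(tool_call["args"], dict):
        raise TypeError("tool call args must be a dict")
```

Call validate_tool_call(tool_call, {"calculate_premium"}) at the top of the loop; catching the exception there is where you decide whether to retry, skip, or report the error back to the model.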
  5. Wrap that loop into a reusable helper so you can use it in a service or API endpoint. In production, this is where you add logging, retries, timeouts, and guardrails around each tool execution.
def run_tool_agent(user_input: str) -> str:
    messages = [HumanMessage(content=user_input)]
    first_response = llm_with_tools.invoke(messages)

    messages.append(first_response)

    for tool_call in first_response.tool_calls:
        if tool_call["name"] == "calculate_premium":
            result = calculate_premium.invoke(tool_call["args"])
            messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))

    final_response = llm_with_tools.invoke(messages)
    return final_response.content

print(run_tool_agent("What would be the monthly premium for a 45-year-old smoker?"))
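Hard-coding if tool_call["name"] == "calculate_premium" does not scale past one tool. A common pattern, sketched here in plain Python rather than any LangChain API, is a registry mapping tool names to callables so the execution loop stays generic as you add tools. The function and registry names below are hypothetical:

```python
# Hypothetical registry: map tool names to plain callables so the
# dispatch loop never needs to change when tools are added.
def calculate_premium_fn(age: int, smoker: bool) -> str:
    base = 120.0
    age_factor = max(0, age - 30) * 3.5
    surcharge = 85.0 if smoker else 0.0
    return f"Monthly premium: ${base + age_factor + surcharge:.2f}"

TOOL_REGISTRY = {"calculate_premium": calculate_premium_fn}

def execute_tool_call(tool_call: dict) -> str:
    fn = TOOL_REGISTRY.get(tool_call["name"])
    if fn is None:
        # Surface unknown tool names instead of failing silently.
        return f"Error: unknown tool {tool_call['name']!r}"
    return fn(**tool_call["args"])
```

Inside run_tool_agent, the if/elif chain then collapses to a single execute_tool_call(tool_call), and unknown tool names become a visible error string the model can react to.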

Testing It

Run the script and check two things: first, that response.tool_calls contains a call to calculate_premium, and second, that the final response includes the computed premium rather than a made-up number. Try changing inputs like age and smoker status to confirm the output changes deterministically.

If you want to validate behavior more aggressively, test edge cases such as missing values or non-numeric ages by adding input validation inside the tool. In production systems, this is where you prevent bad tool arguments from becoming bad downstream actions.
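Because the tool body is a plain function, you can test its logic deterministically without the model in the loop at all. A sketch, with the premium logic copied from the tutorial's tool into a standalone function so the tests need no LangChain or API key:

```python
def calculate_premium_logic(age: int, smoker: bool) -> str:
    # Same arithmetic as the @tool-decorated version, extracted for testing.
    base = 120.0
    age_factor = max(0, age - 30) * 3.5
    smoker_surcharge = 85.0 if smoker else 0.0
    return f"Monthly premium: ${base + age_factor + smoker_surcharge:.2f}"

def test_premium_is_deterministic():
    # 45-year-old smoker: 120 + (15 * 3.5) + 85 = 257.50
    assert calculate_premium_logic(45, True) == "Monthly premium: $257.50"
    assert calculate_premium_logic(45, True) == calculate_premium_logic(45, True)

def test_young_nonsmoker_pays_base_rate():
    assert calculate_premium_logic(25, False) == "Monthly premium: $120.00"
```

Run these with pytest (or call them directly); fast unit tests on the tool body catch arithmetic regressions long before an end-to-end model call would.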

Next Steps

  • Add multiple tools and let the model choose between them with bind_tools()
  • Move from manual loops to LangGraph for multi-step agent workflows
  • Add schema validation and error handling around every external tool call

By Cyprian Aarons, AI Consultant at Topiax.