LangChain Tutorial (Python): adding tool use for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add tool use to a LangChain Python agent so it can call external functions instead of guessing. You need this when your model has to fetch data, run calculations, or trigger business logic like checking a policy status or looking up a customer record.

What You'll Need

  • Python 3.10+
  • A virtual environment
  • langchain
  • langchain-openai
  • python-dotenv
  • An OpenAI API key in your environment
  • Basic familiarity with LangChain chat models and prompts

Install the packages:

pip install langchain langchain-openai python-dotenv

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with a normal chat model and a couple of Python functions you want the model to call. Tools are just regular functions wrapped in LangChain’s @tool decorator, so keep them small and deterministic.
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def get_policy_status(policy_id: str) -> str:
    """Get the status of an insurance policy by policy ID."""
    fake_db = {
        "POL123": "Active",
        "POL456": "Lapsed",
        "POL789": "Pending underwriting",
    }
    return fake_db.get(policy_id, "Policy not found")

@tool
def calculate_premium(age: int, smoker: bool) -> str:
    """Estimate a monthly premium based on age and smoking status."""
    base = 100
    age_factor = max(age - 30, 0) * 2
    smoker_fee = 50 if smoker else 0
    return f"${base + age_factor + smoker_fee}/month"

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_policy_status, calculate_premium]
  2. Bind the tools to the model. This is the point where the LLM learns which functions are available and when it can ask to use them.
llm_with_tools = llm.bind_tools(tools)

response = llm_with_tools.invoke(
    "Check policy POL123 and tell me whether it is active."
)

print(response)
print(response.tool_calls)
  3. Add an agent loop so tool calls actually run. The model will often return a tool call first, then you execute that function and send the result back into the conversation.
from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage(content="Check policy POL123 and tell me whether it is active.")]

ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    if tool_call["name"] == "get_policy_status":
        result = get_policy_status.invoke(tool_call["args"])
    elif tool_call["name"] == "calculate_premium":
        result = calculate_premium.invoke(tool_call["args"])
    else:
        result = f"Unknown tool: {tool_call['name']}"

    messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))

final_msg = llm_with_tools.invoke(messages)
print(final_msg.content)
  4. Wrap that pattern into a reusable function. In production, you do not want one-off notebook logic; you want a small loop that can handle multiple tool calls cleanly.
def run_agent(user_input: str):
    messages = [HumanMessage(content=user_input)]
    ai_msg = llm_with_tools.invoke(messages)
    messages.append(ai_msg)

    for tool_call in ai_msg.tool_calls:
        if tool_call["name"] == "get_policy_status":
            result = get_policy_status.invoke(tool_call["args"])
        elif tool_call["name"] == "calculate_premium":
            result = calculate_premium.invoke(tool_call["args"])
        else:
            result = f"Unknown tool: {tool_call['name']}"

        messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))

    return llm_with_tools.invoke(messages).content

print(run_agent("What is the status of policy POL456?"))
print(run_agent("Estimate premium for age 45 and smoker true"))
  5. If you want LangChain to manage the loop for you, use an agent executor instead of wiring messages manually. This is cleaner once you have more than one or two tools.
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that uses tools when needed."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({"input": "Check policy POL789 and estimate premium for age 40 and smoker false"})
print(result["output"])

Testing It

Run each snippet in order and confirm that the first model response includes tool_calls instead of only natural language. Then verify that your functions return real values, such as Active for POL123 or $120/month for a 40-year-old non-smoker, and that the final answer uses those values rather than inventing its own.

If you use AgentExecutor, set verbose=True so you can see the planning step and the tool execution step in the console. That makes it obvious whether LangChain is calling your function or just answering from the model’s internal knowledge.

Test both success paths and failure paths. For example, try an unknown policy ID like POL999 and confirm your code returns Policy not found instead of crashing.

Next Steps

  • Add more realistic tools backed by databases, REST APIs, or internal services.
  • Learn how to validate tool inputs with Pydantic models before execution.
  • Move from single-turn examples to multi-turn agents with memory and guardrails.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

