LlamaIndex Tutorial (Python): building custom tools for advanced developers
This tutorial shows you how to build a custom LlamaIndex tool in Python, wrap it with metadata, and plug it into an agent that can call your code reliably. You need this when the built-in tools stop being enough and you want the model to interact with your own APIs, databases, or business logic.
What You'll Need
- Python 3.10+
- `llama-index`
- `llama-index-llms-openai`
- An OpenAI API key
- A `.env` file or exported environment variable for `OPENAI_API_KEY`
- Basic familiarity with LlamaIndex agents and tool calling
Install the packages:
```shell
pip install llama-index llama-index-llms-openai python-dotenv
```
Step-by-Step
1. Start with a clean Python file and load your OpenAI key from the environment. This keeps credentials out of source control and makes the script easy to run in local dev or CI.

```python
import os
from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("OPENAI_API_KEY is not set")
```
2. Define a real function that does useful work. For this example, we’ll build a small policy lookup tool that maps a claim type to an internal handling rule.

```python
from typing import Dict

POLICY_RULES: Dict[str, str] = {
    "auto": "Route to claims desk within 1 business day.",
    "home": "Request photos and assign adjuster if damage exceeds $5,000.",
    "health": "Verify eligibility before approving reimbursement.",
}

def lookup_policy(claim_type: str) -> str:
    claim_type = claim_type.lower().strip()
    return POLICY_RULES.get(
        claim_type,
        f"No policy rule found for '{claim_type}'. Escalate to operations.",
    )
```
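Before wiring this into an agent, it’s worth sanity-checking the function on its own. Here is a standalone sketch (the dictionary and function are repeated so the snippet runs by itself):

```python
from typing import Dict

# Repeated from the tutorial so this check runs standalone.
POLICY_RULES: Dict[str, str] = {
    "auto": "Route to claims desk within 1 business day.",
    "home": "Request photos and assign adjuster if damage exceeds $5,000.",
    "health": "Verify eligibility before approving reimbursement.",
}

def lookup_policy(claim_type: str) -> str:
    # Normalization handles casing and stray whitespace from the model.
    claim_type = claim_type.lower().strip()
    return POLICY_RULES.get(
        claim_type,
        f"No policy rule found for '{claim_type}'. Escalate to operations.",
    )

print(lookup_policy("  AUTO "))   # hits the 'auto' rule after normalization
print(lookup_policy("travel"))    # falls through to the escalation message
```

Confirming the normalization and fallback behavior now saves you from debugging them later through the agent loop.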
3. Wrap that function as a LlamaIndex tool with explicit metadata. The metadata matters because the agent uses it to decide when to call the tool and how to describe it to the model.

```python
from llama_index.core.tools import FunctionTool

policy_tool = FunctionTool.from_defaults(
    fn=lookup_policy,
    name="lookup_policy",
    description=(
        "Look up the internal handling rule for a claim type such as "
        "'auto', 'home', or 'health'."
    ),
)
```
4. Create an OpenAI-backed LLM and wire the tool into an agent. This is where your custom function becomes part of the model’s action space.

```python
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent

llm = OpenAI(model="gpt-4o-mini", api_key=api_key)
agent = ReActAgent.from_tools(
    tools=[policy_tool],
    llm=llm,
    verbose=True,
)
```
5. Send a query that should trigger the tool, then inspect the response. Use a prompt that clearly requires your custom logic so you can confirm the agent is actually calling your function.

```python
response = agent.chat(
    "What is the handling rule for an auto insurance claim?"
)
print(response)
```
6. Make the tool more production-friendly by validating input before returning results. In real systems, you want predictable failures instead of letting bad data flow into downstream workflows.

```python
def lookup_policy_strict(claim_type: str) -> str:
    allowed = {"auto", "home", "health"}
    normalized = claim_type.lower().strip()
    if normalized not in allowed:
        raise ValueError(
            f"Unsupported claim type '{claim_type}'. "
            f"Allowed values: {', '.join(sorted(allowed))}"
        )
    return POLICY_RULES[normalized]
```
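A quick check that the strict version actually fails loudly on bad input. This sketch is self-contained, with `POLICY_RULES` repeated from earlier:

```python
POLICY_RULES = {
    "auto": "Route to claims desk within 1 business day.",
    "home": "Request photos and assign adjuster if damage exceeds $5,000.",
    "health": "Verify eligibility before approving reimbursement.",
}

def lookup_policy_strict(claim_type: str) -> str:
    allowed = {"auto", "home", "health"}
    normalized = claim_type.lower().strip()
    if normalized not in allowed:
        raise ValueError(
            f"Unsupported claim type '{claim_type}'. "
            f"Allowed values: {', '.join(sorted(allowed))}"
        )
    return POLICY_RULES[normalized]

# Known types return their rule; unknown types raise immediately.
try:
    lookup_policy_strict("travel")
except ValueError as exc:
    print(exc)
```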
If you want to swap in the stricter version, rebuild the tool from that function:
```python
strict_policy_tool = FunctionTool.from_defaults(
    fn=lookup_policy_strict,
    name="lookup_policy_strict",
    description="Validate and look up approved insurance claim types only.",
)
```
Testing It
Run the script and confirm you see verbose agent output showing a tool call before the final answer. If the model answers directly without using `lookup_policy`, your tool description is probably too vague or your prompt is too generic.
Test three cases:
- A known value like `auto`
- Another known value like `home`
- An unknown value like `travel`
For the unknown case, make sure your strict version raises a clear exception or returns an escalation message, depending on which behavior you want in production.
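If you want strict validation internally but the escalation-message behavior at the tool boundary, one pattern is a thin wrapper that catches the exception before it reaches the agent. This is a sketch under my own naming (`lookup_policy_safe` is not part of the tutorial or LlamaIndex), self-contained so it runs standalone:

```python
def lookup_policy_strict(claim_type: str) -> str:
    # Repeated from the tutorial so this wrapper runs standalone.
    rules = {
        "auto": "Route to claims desk within 1 business day.",
        "home": "Request photos and assign adjuster if damage exceeds $5,000.",
        "health": "Verify eligibility before approving reimbursement.",
    }
    normalized = claim_type.lower().strip()
    if normalized not in rules:
        raise ValueError(f"Unsupported claim type '{claim_type}'.")
    return rules[normalized]

def lookup_policy_safe(claim_type: str) -> str:
    """Validate strictly, but hand the agent a readable message instead of a traceback."""
    try:
        return lookup_policy_strict(claim_type)
    except ValueError as exc:
        return f"{exc} Escalate to operations."
```

You would then wrap `lookup_policy_safe` with `FunctionTool.from_defaults` exactly as shown earlier, keeping the hard failure available for non-agent callers.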
Next Steps
- Add structured inputs with Pydantic models instead of plain strings.
- Wrap real internal APIs or database queries behind `FunctionTool`.
- Explore multi-tool agents with routing rules for claims, billing, and customer support.
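For the first of those next steps, here is a minimal sketch of a Pydantic input model (assuming Pydantic v2, which current LlamaIndex releases depend on; the model and validator names are my own). LlamaIndex lets you pass such a model via the `fn_schema` argument of `FunctionTool.from_defaults`:

```python
from pydantic import BaseModel, field_validator

ALLOWED_CLAIM_TYPES = {"auto", "home", "health"}

class ClaimQuery(BaseModel):
    """Structured input schema for the policy lookup tool."""
    claim_type: str

    @field_validator("claim_type")
    @classmethod
    def normalize_and_check(cls, value: str) -> str:
        # Normalize and validate before the tool function ever runs.
        value = value.lower().strip()
        if value not in ALLOWED_CLAIM_TYPES:
            raise ValueError(
                f"claim_type must be one of {sorted(ALLOWED_CLAIM_TYPES)}"
            )
        return value

print(ClaimQuery(claim_type="  AUTO ").claim_type)
```

With a schema like this, malformed tool calls fail at validation time with a structured error the model can read, rather than inside your business logic.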
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.