What Is Tool Use in AI Agents? A Guide for Engineering Managers in Wealth Management
Tool use is an AI agent's ability to call external functions, APIs, or systems to get work done instead of only generating text. In practice, it means the agent can look up data, submit requests, run calculations, or trigger workflows inside your bank’s or insurer’s systems.
How It Works
Think of an AI agent as a relationship manager who knows the process but does not personally hold every account record in their head.
When a client asks, “What is my current portfolio exposure and can you rebalance me into a more conservative allocation?”, the agent should not guess. It should:
- read the request
- decide which tools are needed
- call those tools in sequence
- combine the results
- return a response or take an action
That is tool use.
A simple mental model is this:
| Part | What it does |
|---|---|
| Agent | Interprets the user request and decides what to do |
| Tool | A function or API that performs one specific task |
| Orchestrator | Controls which tool runs, in what order, and with what inputs |
| System of record | The source of truth, like CRM, portfolio platform, policy admin system, or market data feed |
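The four parts in the table can be sketched as a minimal loop. This is an illustrative sketch, not a real framework; all names and the stubbed tool data are assumptions:

```python
from dataclasses import dataclass
from typing import Callable

# Tool: one specific task, normally backed by a system of record (stubbed here).
TOOLS: dict[str, Callable[..., dict]] = {
    "get_cash_position": lambda account_id: {"account_id": account_id, "cash": 25_000.0},
}

@dataclass
class ToolCall:
    name: str
    args: dict

def orchestrate(plan: list[ToolCall]) -> list[dict]:
    """Orchestrator: runs each tool the agent chose, in order, with its inputs."""
    results = []
    for call in plan:
        tool = TOOLS[call.name]          # only registered, approved tools are callable
        results.append(tool(**call.args))
    return results

# The agent (an LLM, not shown) would turn the user request into a plan like this:
plan = [ToolCall("get_cash_position", {"account_id": "ACC-1"})]
print(orchestrate(plan))
```

The key design point is that the orchestrator, not the model, actually executes code: the model only proposes a plan, and anything outside the registry simply cannot run.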
For wealth management, this matters because most useful actions require live data and controlled execution. An agent answering from memory is just a chatbot. An agent using tools becomes an operational assistant.
Example flow:
- Advisor asks: “What is client X’s current cash position and pending orders?”
- The agent calls `get_client_profile(client_id)`, `get_portfolio_positions(account_id)`, and `get_open_orders(account_id)`.
- The agent summarizes the results.
- If allowed, it can also call `create_rebalance_proposal()` or `draft_client_email()`.
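That flow might look like the following sketch. The function names mirror the flow above, but the implementations and data are stubs I am assuming for illustration:

```python
# Stubbed read-only tools; in production each would hit a system of record.
def get_client_profile(client_id):
    return {"client_id": client_id, "name": "Client X", "accounts": ["ACC-1"]}

def get_portfolio_positions(account_id):
    return {"account_id": account_id, "cash": 12_500.0,
            "positions": [{"ticker": "AGG", "qty": 100}]}

def get_open_orders(account_id):
    return [{"order_id": "ORD-9", "ticker": "AGG", "side": "BUY",
             "qty": 50, "status": "PENDING"}]

def answer_cash_and_orders(client_id):
    """Call the three tools in sequence and combine results into one summary."""
    profile = get_client_profile(client_id)
    account_id = profile["accounts"][0]        # resolve account from profile
    positions = get_portfolio_positions(account_id)
    orders = get_open_orders(account_id)
    return (f"{profile['name']}: cash {positions['cash']:,.2f}, "
            f"{len(orders)} pending order(s)")

print(answer_cash_and_orders("C-4821"))
# → Client X: cash 12,500.00, 1 pending order(s)
```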
The key point: the model is not doing everything itself. It is choosing when to use external capabilities.
For engineering managers, the important distinction is between:
- generation: the model writes text
- tool use: the model takes structured action through approved interfaces
That separation gives you control. You can constrain what the agent can access, log every call, require human approval for sensitive actions, and keep regulated workflows inside existing governance.
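An "approved interface" usually means a declared tool schema. One common shape is a JSON-schema style declaration similar to what several major LLM APIs accept; exact field names vary by vendor, so treat this as a representative example rather than any one provider's format:

```python
# Declaring a read-only tool to an LLM. The model can only request calls that
# validate against this schema; the orchestrator rejects anything else.
get_open_orders_tool = {
    "name": "get_open_orders",
    "description": "List pending orders for one account. Read-only.",
    "parameters": {
        "type": "object",
        "properties": {
            "account_id": {
                "type": "string",
                "description": "Internal account identifier",
            },
        },
        "required": ["account_id"],
    },
}
```

Because the schema is explicit, you can review it like any other API contract: compliance can see exactly what the agent may ask for before it ever runs.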
Why It Matters
- **Reduces hallucinations.** In wealth management, stale balances or fabricated product details are unacceptable. Tool use lets the agent pull live data from trusted systems instead of inventing answers.
- **Turns chat into workflow automation.** The value is not “nice conversation.” The value is completing tasks like account lookup, suitability checks, document retrieval, and case creation with fewer manual handoffs.
- **Supports compliance and auditability.** Tool calls can be logged with inputs, outputs, timestamps, and approver IDs. That gives compliance teams something they can review instead of opaque free-text reasoning.
- **Improves operational scale.** One well-designed agent can handle repetitive advisor support tasks across many branches or teams without increasing headcount linearly.
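The auditability point above is concrete: every tool call becomes one structured record. A minimal sketch, assuming a JSON log format and fields chosen here for illustration:

```python
import datetime
import json

def log_tool_call(name, args, result, approver_id=None):
    """Build one audit record per tool call, with the fields compliance reviews."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": name,
        "inputs": args,
        "outputs": result,
        "approver_id": approver_id,   # None for read-only calls; required for writes
    }
    # In practice this would go to an append-only, tamper-evident store.
    return json.dumps(record)
```

Reviewers then audit what the agent actually did, call by call, instead of trying to reconstruct intent from a chat transcript.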
Real Example
A private bank wants to help advisors prepare for client review meetings.
Without tool use:
- The advisor opens the CRM
- Checks portfolio holdings in another platform
- Pulls performance data from a reporting tool
- Downloads notes from document storage
- Manually drafts a meeting summary
With tool use:
- Advisor types: “Prepare a review brief for Client 4821.”
- The agent calls `fetch_crm_notes(4821)`, `fetch_portfolio_performance(4821)`, `fetch_cash_flow_events(4821)`, and `search_documents(4821, "IPS")`.
- The agent compiles a brief covering:
  - recent life events
  - portfolio performance versus benchmark
  - upcoming cash needs
  - relevant restrictions from the investment policy statement
- If enabled, it drafts follow-up actions:
  - schedule a review call
  - generate an email summary
  - create a suitability checklist for advisor sign-off
This is useful because it compresses five systems into one workflow while keeping each step traceable.
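The brief-preparation step can be sketched as a small composition of the four calls. Tool names come from the flow above; the stubbed data and report layout are assumptions for illustration:

```python
# Stubbed tools matching the review-brief flow (all data is illustrative).
def fetch_crm_notes(cid):
    return ["Daughter starting university in 2026"]

def fetch_portfolio_performance(cid):
    return {"ytd": 0.041, "benchmark_ytd": 0.035}

def fetch_cash_flow_events(cid):
    return [{"date": "2026-09-01", "amount": -40_000, "label": "Tuition"}]

def search_documents(cid, query):
    return [f"{query} restriction: max 10% single issuer"]

def prepare_review_brief(client_id):
    """Compose one advisor-facing brief from four tool calls."""
    perf = fetch_portfolio_performance(client_id)
    lines = [
        f"Review brief for client {client_id}",
        "Recent life events: " + "; ".join(fetch_crm_notes(client_id)),
        f"Performance: {perf['ytd']:.1%} YTD vs benchmark {perf['benchmark_ytd']:.1%}",
        "Upcoming cash needs: " + ", ".join(e["label"] for e in fetch_cash_flow_events(client_id)),
        "Restrictions: " + "; ".join(search_documents(client_id, "IPS")),
    ]
    return "\n".join(lines)

print(prepare_review_brief(4821))
```

Each line of the brief traces back to exactly one tool call, which is what makes the workflow auditable step by step.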
From an engineering perspective, you would usually implement this with:
- strict tool schemas
- role-based access control
- read-only tools first
- human approval before write actions
- deterministic validation on outputs before anything reaches production systems
That last part matters. In regulated environments, the agent should never directly place trades or update client records unless you have explicit controls around permissions, thresholds, and approvals.
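One way to enforce that control is a gate in the dispatcher itself: write tools cannot execute without a named approver, no matter what the model requests. A minimal sketch with hypothetical tool names:

```python
class ApprovalRequired(Exception):
    """Raised when a write action is attempted without human sign-off."""

# Any tool that mutates a production system belongs in this set.
WRITE_TOOLS = {"create_rebalance_proposal", "update_client_record"}

def call_tool(name, args, *, approved_by=None):
    """Dispatch a tool call; write actions never run without a human approver."""
    if name in WRITE_TOOLS and approved_by is None:
        raise ApprovalRequired(f"{name} needs human sign-off before execution")
    # ... dispatch to the real tool implementation here (stubbed) ...
    return {"tool": name, "args": args, "approved_by": approved_by}
```

Because the check lives in code rather than in the prompt, it holds even if the model is jailbroken or simply wrong.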
Related Concepts
- **Function calling.** The mechanism that lets an LLM invoke structured code instead of returning plain text.
- **Agent orchestration.** The logic that decides which tool runs next and how results are combined.
- **RAG (retrieval-augmented generation).** A pattern where the model retrieves documents or records before answering.
- **Human-in-the-loop approval.** A control layer where a person reviews sensitive actions before execution.
- **Guardrails.** Policies and validation rules that restrict what tools an agent can access and how outputs are used.
Keep Learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.