# What Is Tool Use in AI Agents? A Guide for Compliance Officers in Wealth Management
Tool use in AI agents is the ability of an AI system to call external tools, such as search, calculators, databases, APIs, or workflow systems, to complete a task. In practice, it means the agent does not just generate text; it can take actions in other systems to gather facts, verify data, and execute approved steps.
## How It Works
Think of an AI agent as a junior analyst with a checklist and access to firm systems. It can read a request, decide it needs more information, use the right tool to get that information, then continue with the next step.
A simple flow looks like this:
- A user asks: “Can this client be onboarded under our policy?”
- The agent reviews the request and identifies missing data.
- It calls tools:
  - CRM to fetch the client profile
  - KYC/AML screening service
  - Policy knowledge base
  - Case management system
- It combines the results and returns a response or drafts a recommendation.
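The flow above can be sketched in code. This is a minimal illustration of the pattern, not a production agent: the tool functions (`fetch_crm_profile`, `run_kyc_screening`, `lookup_policy`) and their return shapes are assumptions standing in for real firm systems.

```python
# Minimal sketch of an agent tool-use flow. All tool functions are
# placeholders standing in for real firm systems.

def fetch_crm_profile(client_id):
    # Placeholder: would query the CRM for the client profile
    return {"client_id": client_id, "segment": "private", "risk": "balanced"}

def run_kyc_screening(client_id):
    # Placeholder: would call the KYC/AML screening service
    return {"client_id": client_id, "hits": []}

def lookup_policy(topic):
    # Placeholder: would search the policy knowledge base
    return {"topic": topic, "rule": "Onboarding requires completed KYC."}

def handle_request(client_id):
    """Gather evidence via tools, then draft a recommendation."""
    profile = fetch_crm_profile(client_id)       # step 1: fetch client data
    screening = run_kyc_screening(client_id)     # step 2: screen the client
    policy = lookup_policy("onboarding")         # step 3: check the policy
    clear = len(screening["hits"]) == 0          # step 4: combine results
    return {
        "client": profile["client_id"],
        "policy_basis": policy["rule"],
        "recommendation": "proceed" if clear else "escalate",
    }
```

The point of the sketch is the shape of the loop: the agent does not answer from memory; it gathers evidence from each system before drafting an output.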
The key point for compliance is that the model itself is not “guessing” everything from memory. It is orchestrating actions across controlled systems.
A useful analogy is a compliance officer working with a file room and a calculator. The officer does not memorize every policy clause or manually compute exposure ratios from scratch. They consult the right source, verify the numbers, then make a judgment. Tool use gives an AI agent that same pattern: retrieve, verify, act.
For wealth management, this matters because decisions often depend on current state:
- Current holdings
- Client risk profile
- Product eligibility rules
- Jurisdiction-specific restrictions
- Sanctions and watchlist status
Without tools, an AI assistant can only talk about these things in general terms. With tools, it can check live records and produce responses grounded in actual firm data.
From an engineering perspective, tool use usually has three parts:
| Part | What it does | Compliance angle |
|---|---|---|
| Tool selection | Agent decides which system to call | Limits what data/actions are accessible |
| Tool execution | System performs lookup or action | Creates audit trail and control point |
| Result handling | Agent uses returned data in its answer | Needs validation before output |
The important control question is not “Can the model think?” It is “Which tools can it call, under what conditions, and how are those calls logged?”
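The three parts in the table can be made concrete as a gated dispatcher: selection is constrained by an allowlist, execution is logged, and results are validated before the agent uses them. Everything here, including the tool names and the log format, is a sketch of the control pattern, not any specific product's API.

```python
import datetime

# Sketch of the three parts of tool use as control points.
# Tool names, allowlist, and log format are illustrative assumptions.

ALLOWED_TOOLS = {"crm_lookup", "policy_check"}   # tool selection boundary
AUDIT_LOG = []

def call_tool(name, func, **kwargs):
    # Tool selection: the agent can only reach approved systems
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{name}' is not approved for this agent")
    # Tool execution: every call leaves an audit record
    result = func(**kwargs)
    AUDIT_LOG.append({
        "tool": name,
        "inputs": kwargs,
        "output": result,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # Result handling: validate before the agent builds on the data
    if result is None:
        raise ValueError(f"Tool '{name}' returned no data")
    return result
```

Each of the three stages answers one of the control questions above: the allowlist defines which tools can be called, the log records under what conditions, and the validation step decides what the agent may build on.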
## Why It Matters
- **Reduces hallucination risk.** An agent that uses approved tools can base answers on live policy documents or client records instead of making up details.
- **Supports auditability.** Tool calls can be logged with timestamps, inputs, outputs, and user context. That gives compliance teams something concrete to review.
- **Enables least-privilege design.** You can restrict an agent to read-only tools for advisory workflows and reserve write actions for tightly controlled cases.
- **Improves policy consistency.** The same tool-backed workflow checks the same rules every time, which reduces variation across teams and channels.
- **Creates new control points.** Each tool boundary is a place to enforce approvals, redaction, segmentation by jurisdiction, or human review triggers.
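The least-privilege point can be sketched as a tool registry that treats write actions differently from reads. The registry class, tool names, and approval flag are assumptions used only to illustrate the design; a real deployment would tie approval to the firm's sign-off workflow.

```python
# Sketch of least-privilege tool registration: read tools run freely,
# write tools are blocked until a human approval is attached.
# All names are illustrative assumptions.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, func, writes=False):
        self._tools[name] = {"func": func, "writes": writes}

    def call(self, name, approved_by=None, **kwargs):
        tool = self._tools[name]
        # Control point: write actions require explicit human sign-off
        if tool["writes"] and approved_by is None:
            raise PermissionError(f"'{name}' needs human approval before running")
        return tool["func"](**kwargs)

registry = ToolRegistry()
registry.register("read_profile", lambda cid: {"id": cid}, writes=False)
registry.register("update_case", lambda cid, note: "updated", writes=True)
```

With this shape, an advisory workflow can be wired up with read-only tools only, and the one write tool stays inert until a supervisor is named on the call.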
For compliance officers in wealth management, this is where AI becomes operationally relevant. The risk is no longer just bad wording in a chatbot response. The risk includes unauthorized access, incorrect suitability guidance, bad record updates, and weak supervision if tool access is not governed properly.
## Real Example
A private bank wants an internal AI assistant to help relationship managers prepare client review notes before an annual meeting.
The agent receives this prompt: “Prepare a summary for Client X’s review meeting and flag any suitability issues.”
Here is how tool use works in practice:
- The agent calls the CRM tool to pull:
  - Client segment
  - Risk tolerance
  - Investment objectives
  - Last review date
- It calls the portfolio system to retrieve:
  - Current asset allocation
  - Concentration by issuer and sector
  - Recent trades
- It calls the policy engine to check:
  - Whether current holdings exceed concentration thresholds
  - Whether any product is outside documented suitability parameters
- It calls the surveillance/case system to see:
  - Open exceptions
  - Prior remediation items
  - Required follow-ups
- It drafts a summary for human review:
  - “Client X remains within stated risk profile.”
  - “Equity concentration exceeds internal guideline by 4%.”
  - “Recommend RM review rationale before meeting.”
This is materially different from asking a general-purpose model to “summarize the account.” The agent is using controlled systems to assemble evidence before drafting output.
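The final drafting step can be sketched as a function that turns tool outputs into the review note. The field names (`equity_pct`, `max_equity_pct`) and the threshold logic are assumptions for illustration; the real policy engine would supply its own rules.

```python
# Sketch of assembling tool outputs into a draft review note.
# Field names and the concentration rule are illustrative assumptions.

def draft_review_note(profile, allocation, policy_limits):
    """Combine evidence from firm systems into a draft for human review."""
    flags = []
    equity = allocation["equity_pct"]
    limit = policy_limits["max_equity_pct"]
    if equity > limit:
        # Flag is derived from live data, not generated from memory
        flags.append(
            f"Equity concentration exceeds internal guideline by {equity - limit}%."
        )
    if flags:
        lines = [f"Client {profile['id']} has open suitability flags."]
    else:
        lines = [f"Client {profile['id']} remains within stated risk profile."]
    lines += flags
    lines.append("Recommend RM review rationale before meeting.")
    return "\n".join(lines)
```

Note that the function only assembles and flags; the judgment call stays with the relationship manager who reviews the draft.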
For compliance teams, the guardrails should be explicit:
- Read-only access by default
- No direct trade submission from the agent unless separately approved
- Mandatory logging of every tool call
- Human sign-off before client-facing or regulatory-impacting actions
- Data minimization so the agent only sees what it needs
That setup turns tool use into a supervised workflow rather than an autonomous decision-maker.
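Those guardrails can be written down as declarative configuration that the orchestration layer enforces. The keys and values below are assumptions for illustration; declaring a policy like this only works if the runtime actually checks it on every call.

```python
# Sketch of agent guardrails as declarative configuration.
# Keys and values are illustrative assumptions; enforcement must
# happen in the orchestration layer, not just in the declaration.

AGENT_POLICY = {
    "default_access": "read_only",
    "write_tools_require_approval": True,
    "log_every_tool_call": True,
    "human_signoff_required_for": [
        "client_facing_output",
        "regulatory_action",
    ],
    # Data minimization: the agent sees only the CRM fields it needs
    "crm_fields_visible": [
        "segment",
        "risk_tolerance",
        "objectives",
        "last_review",
    ],
}

def requires_signoff(action, policy=AGENT_POLICY):
    """Return True if a human must approve this action before it runs."""
    return action in policy["human_signoff_required_for"]
```

A configuration like this also gives compliance something reviewable: the agent's permissions become a document that can be versioned, approved, and audited.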
## Related Concepts
- **Function calling.** The technical mechanism many models use to invoke tools with structured inputs.
- **Agent orchestration.** How an AI system decides when to think, retrieve data, call tools, or ask for human input.
- **Retrieval-Augmented Generation (RAG).** A pattern where the model retrieves documents before answering; related to tool use but usually focused on document retrieval rather than action execution.
- **Human-in-the-loop controls.** Review gates where staff approve high-risk outputs or actions before they proceed.
- **Audit logging and model governance.** The policies and records needed to show who asked for what, what tools were called, what data was used, and what action followed.
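Function calling, the first concept above, generally means the model emits a structured call against a declared schema and the runtime validates the arguments before executing anything. The sketch below uses a simplified generic schema, not any specific vendor's format.

```python
import json

# Generic sketch of function calling: a tool is declared with a schema,
# the model emits a structured call, and the runtime validates arguments
# before execution. The schema layout is a simplified assumption.

TOOL_SCHEMA = {
    "name": "check_concentration",
    "parameters": {
        "client_id": {"type": "string", "required": True},
        "asset_class": {"type": "string", "required": True},
    },
}

def validate_call(call, schema=TOOL_SCHEMA):
    """Reject a model-emitted tool call whose arguments don't match the schema."""
    if call.get("name") != schema["name"]:
        return False
    for param, spec in schema["parameters"].items():
        if spec["required"] and param not in call.get("arguments", {}):
            return False
    return True

# A model's structured output arrives as JSON and is parsed, not trusted
model_output = json.dumps({
    "name": "check_concentration",
    "arguments": {"client_id": "C-42", "asset_class": "equity"},
})
call = json.loads(model_output)
```

The validation step is itself a control point: a malformed or out-of-schema call never reaches a firm system.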
## Keep Learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit