What Is Tool Use in AI Agents? A Guide for Developers in Wealth Management

By Cyprian Aarons · Updated 2026-04-21

Tool use in AI agents is the ability for an agent to call external functions, APIs, databases, or services to complete a task. Instead of guessing, the agent decides when to fetch data, run a calculation, check a policy, or trigger a workflow.

How It Works

Think of an AI agent like a wealth advisor with a research desk and a trading terminal. The advisor does not memorize every market price or client restriction; they ask the right system for the right data, then use that result to make the next decision.

In practice, tool use follows a loop:

  • The user asks for something
  • The model interprets the request
  • The agent chooses a tool
  • The tool returns structured output
  • The model uses that output to continue or finish
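The loop above can be sketched in a few lines of Python. This is a minimal, framework-free sketch: the `plan_next_step` callable stands in for the model, and the tool registry here is a hypothetical placeholder, not a specific SDK.

```python
# Minimal sketch of the tool-use loop. The model is abstracted as a
# plan_next_step callable; the tool names and shapes are illustrative.

TOOLS = {
    # A deterministic stand-in tool with structured output.
    "get_quote": lambda symbol: {"symbol": symbol, "price": 101.25},
}

def run_agent(user_request, plan_next_step):
    """Loop: model interprets, picks a tool, consumes its output, repeats."""
    context = [{"role": "user", "content": user_request}]
    while True:
        step = plan_next_step(context)          # model interprets the request
        if step["type"] == "final_answer":      # model decides it is done
            return step["content"]
        tool = TOOLS[step["tool"]]              # agent chooses a tool
        result = tool(**step["arguments"])      # tool returns structured output
        context.append({"role": "tool", "name": step["tool"], "content": result})
```

In a real agent, `plan_next_step` would be an LLM call that returns either a tool invocation or a final answer; the loop itself stays this simple.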

A simple example:

  • User: “Can this client rebalance into a 60/40 portfolio without violating their ESG constraints?”
  • Agent:
    • checks the client profile tool
    • checks holdings and current allocation
    • checks ESG policy rules
    • calculates whether the target is allowed
    • returns an answer with reasons

The key point is that the model is not acting alone. It is orchestrating tools.

For developers, this usually means defining tools as typed functions with clear inputs and outputs. Keep them narrow and deterministic.

{
  "name": "get_client_profile",
  "description": "Fetch KYC profile, risk score, and account metadata",
  "parameters": {
    "type": "object",
    "properties": {
      "client_id": { "type": "string" }
    },
    "required": ["client_id"]
  }
}

A good mental model is this: the LLM handles interpretation and reasoning, while tools handle facts and actions. That separation matters in regulated environments because you want auditability, predictable behavior, and clean system boundaries.

Why It Matters

  • Reduces hallucinations

    • The agent can verify facts against your source systems instead of inventing account balances, policy terms, or suitability answers.
  • Improves regulatory defensibility

    • Tool calls create an audit trail: what was asked, what data was retrieved, and what action was taken.
  • Makes agents actually useful

    • A chat interface without tools is mostly text generation. With tools, it can open tickets, run suitability checks, pull portfolio data, or generate client-specific summaries.
  • Keeps business logic where it belongs

    • Rules for suitability, tax treatment, approvals, and entitlement checks should live in services you control, not inside prompt text.

Real Example

A wealth management firm wants an internal assistant for relationship managers. The assistant helps answer: “Can I recommend this structured note to Client X?”

Here’s how tool use fits in:

  1. The RM asks the assistant about Client X.
  2. The agent calls get_client_profile(client_id) to retrieve:
    • risk tolerance
    • investment horizon
    • jurisdiction
    • product restrictions
  3. It calls get_product_metadata(product_id) to retrieve:
    • asset class
    • complexity rating
    • issuer risk
    • liquidity terms
  4. It calls run_suitability_check(client_profile, product_metadata) in a rules engine.
  5. It returns:
    • whether the product is suitable
    • which rules passed or failed
    • what evidence was used

Example response:

“Client X does not meet suitability requirements. Their stated risk tolerance is moderate, but the product has high complexity and limited liquidity. Rule SUIT-014 failed because products with complexity > 4 require an aggressive risk profile.”

That workflow is far better than asking the model to infer suitability from raw documents alone.
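Steps 3–5 of that workflow can be sketched as plain functions. The rule code SUIT-014 and the field names are taken from the example above; everything else (the dict shapes, the single hard-coded rule) is an assumption standing in for a real rules engine.

```python
# Sketch of the suitability check from the workflow above. A production
# version would call a versioned rules engine, not hard-code one rule.

def run_suitability_check(profile: dict, product: dict) -> dict:
    failures = []
    # Hypothetical rule mirroring the example response: products with
    # complexity > 4 require an aggressive risk profile (SUIT-014).
    if product["complexity"] > 4 and profile["risk_tolerance"] != "aggressive":
        failures.append("SUIT-014")
    return {
        "suitable": not failures,
        "failed_rules": failures,
        "evidence": {
            "risk_tolerance": profile["risk_tolerance"],
            "complexity": product["complexity"],
        },
    }

def check_recommendation(profile: dict, product: dict) -> str:
    """Turn the structured result into an answer with reasons."""
    result = run_suitability_check(profile, product)
    if result["suitable"]:
        return "Product is suitable."
    return f"Not suitable. Failed rules: {', '.join(result['failed_rules'])}"
```

Note that the model never decides suitability itself; it only relays the structured verdict and evidence returned by the check.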

A production version would also log:

  • user identity
  • timestamp
  • tool inputs and outputs
  • rule engine version
  • final response

That gives compliance teams something they can review later.

Related Concepts

  • Function calling

    • The mechanism many LLMs use to invoke tools with structured arguments.
  • Agent orchestration

    • The control flow that decides which tool to call next and when to stop.
  • RAG (Retrieval-Augmented Generation)

    • Pulling documents or records into context before generating an answer; often used alongside tool use.
  • Workflow automation

    • Triggering downstream actions like ticket creation, approvals, notifications, or case updates.
  • Guardrails

    • Validation layers that restrict which tools can be called and what data can be returned based on role or policy.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

