# What Is Function Calling in AI Agents? A Guide for Product Managers in Wealth Management
Function calling is the ability for an AI agent to choose and invoke a specific software function, API, or tool to complete a task instead of only generating text. In practice, it lets the model say, “I need portfolio data,” then trigger the right system action to fetch it, calculate it, or update it.
## How It Works
Think of function calling like a wealth manager handing work to the right specialist.
A product manager might know the client needs a risk profile update, but they do not personally recalculate the portfolio. They route the request to compliance, operations, or an investment engine. Function calling works the same way: the AI agent decides which tool to use, passes structured inputs, and waits for the result before responding.
The flow is usually:
- The user asks a question in natural language
- The model interprets the intent
- The model selects a predefined function
- The system sends structured arguments to that function
- The function returns data or completes an action
- The model turns that result into a response
A simple example:
```json
{
  "name": "get_client_portfolio",
  "arguments": {
    "client_id": "C12345",
    "as_of_date": "2026-04-21"
  }
}
```
Instead of hallucinating a portfolio value, the agent calls `get_client_portfolio`, gets real data from your core systems or data warehouse, then explains it in plain language.
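The dispatch step can be sketched in a few lines. This is a minimal illustration, not a production pattern: `get_client_portfolio` and its return values are hypothetical stand-ins for your own data layer.

```python
import json

def get_client_portfolio(client_id: str, as_of_date: str) -> dict:
    """Hypothetical backend lookup; in production this would query your
    core banking system or data warehouse."""
    return {"client_id": client_id, "as_of_date": as_of_date,
            "total_value": 1_250_000.00, "holdings": ["AGG", "VTI"]}

# Registry mapping function names the model may emit to real callables.
FUNCTIONS = {"get_client_portfolio": get_client_portfolio}

def dispatch(call_json: str) -> dict:
    """Parse a model-emitted function call and execute the matching function."""
    call = json.loads(call_json)
    fn = FUNCTIONS[call["name"]]  # fail loudly on unknown function names
    return fn(**call["arguments"])

result = dispatch('{"name": "get_client_portfolio", '
                  '"arguments": {"client_id": "C12345", "as_of_date": "2026-04-21"}}')
```

The key design point: the model only produces the JSON; your system owns the registry and the execution, so nothing runs that you did not predefine.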
For wealth management teams, this matters because an AI agent is not just chatting. It becomes an orchestration layer that can:
- Pull account balances from trusted systems
- Check suitability rules
- Generate meeting prep summaries
- Trigger workflows like alerts or case creation
An everyday analogy: think of a concierge at a private bank. The concierge does not personally book every flight, move every asset, or draft every document. They know which desk to call and what information to provide. Function calling gives the AI that same coordination ability.
## Why It Matters
Product managers in wealth management should care because function calling changes what AI can safely do in production.
- **It reduces bad answers.** The model does not have to guess facts like AUM, holdings, or fee schedules. It can retrieve them from source systems.
- **It enables real workflows.** An agent can move beyond Q&A into actions: schedule reviews, create tasks, flag exceptions, or start approval flows.
- **It improves control and auditability.** Functions are predefined. That means you can log what was called, with what inputs, and what happened next.
- **It helps with compliance.** You can force policy checks before any client-facing response is produced. That is important for suitability, disclosures, and regulated advice boundaries.
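The auditability point above is easy to make concrete: because every call goes through your dispatch layer, you can wrap each function so its name, inputs, and outcome are logged. A minimal sketch, assuming a hypothetical `check_advisory_tier_eligibility` function and a print statement standing in for a real audit log sink:

```python
import functools
import json
import time

def audited(fn):
    """Log every call: function name, arguments, and outcome, for the audit trail."""
    @functools.wraps(fn)
    def wrapper(**kwargs):
        entry = {"ts": time.time(), "function": fn.__name__, "arguments": kwargs}
        try:
            result = fn(**kwargs)
            entry["status"] = "ok"
            return result
        except Exception as exc:
            entry["status"] = f"error: {exc}"
            raise
        finally:
            print(json.dumps(entry))  # stand-in for a real audit log sink
    return wrapper

@audited
def check_advisory_tier_eligibility(client_id: str) -> dict:
    # Hypothetical deterministic rule; real logic lives in your policy engine.
    return {"eligible": client_id == "C12345"}

outcome = check_advisory_tier_eligibility(client_id="C12345")
```

Every audit entry is emitted whether the call succeeds or fails, which is exactly what a compliance review needs.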
For PMs, the key shift is this: you are no longer designing a chatbot. You are designing a decisioning and execution layer around trusted enterprise functions.
## Real Example
Imagine a relationship manager asks an internal AI agent:
“Can I tell this client they qualify for our premium advisory tier?”
A good agent should not answer from memory. It should call functions in sequence.
First, it checks eligibility:
```json
{
  "name": "check_advisory_tier_eligibility",
  "arguments": {
    "client_id": "C12345"
  }
}
```
Then it retrieves relevant account data:
```json
{
  "name": "get_assets_under_management",
  "arguments": {
    "client_id": "C12345"
  }
}
```
Then it applies policy logic:
```json
{
  "name": "evaluate_suitability_rules",
  "arguments": {
    "client_id": "C12345",
    "product_code": "PREMIUM_ADVISORY"
  }
}
```
The output might be:
```json
{
  "eligible": true,
  "reasons": [
    "AUM above threshold",
    "KYC current",
    "risk profile matches"
  ]
}
```
Now the agent can respond with something useful and controlled:
“Yes. Based on current AUM, active KYC status, and suitability checks, the client qualifies for the premium advisory tier. I can draft the outreach note if you want.”
That is function calling in production terms: the model interprets intent, the system executes verified business logic, and the final response reflects actual system state.
This pattern is especially useful in banking and insurance because it separates three things cleanly:
| Layer | Responsibility | Example |
|---|---|---|
| Model | Understands intent and chooses actions | “Check eligibility” |
| Function | Executes deterministic business logic | `evaluate_suitability_rules()` |
| Policy/Controls | Enforces compliance and logging | Approval gates, audit trail |
That separation keeps your product safer than letting the model invent answers directly.
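The policy/controls layer from the table can be as simple as an allowlist plus an approval gate. A minimal sketch, with hypothetical function names: the model may *request* any action, but only registered functions run, and sensitive ones require a human sign-off first.

```python
# Functions the agent is permitted to invoke at all.
ALLOWED_FUNCTIONS = {"get_client_portfolio", "evaluate_suitability_rules",
                     "move_funds"}
# Functions that additionally require explicit human approval.
REQUIRES_APPROVAL = {"move_funds"}

def authorize(function_name: str, has_human_approval: bool = False) -> bool:
    """Decide whether a model-requested function call may execute."""
    if function_name not in ALLOWED_FUNCTIONS:
        return False  # deny anything unregistered, full stop
    if function_name in REQUIRES_APPROVAL:
        return has_human_approval  # approval gate before execution
    return True

ok_read = authorize("get_client_portfolio")
blocked = authorize("move_funds")
approved = authorize("move_funds", has_human_approval=True)
```

This check sits between the model's intent and the function registry, so every denial is a logged policy decision rather than a prompt-engineering hope.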
## Related Concepts
- **Tool use:** Broader term for letting agents call external systems like search APIs, databases, calculators, or workflow engines.
- **Structured outputs:** The format that makes function arguments machine-readable instead of free-form text.
- **Agent orchestration:** The logic that decides which tool to call next and in what order.
- **RAG (Retrieval-Augmented Generation):** Useful when the agent needs documents or policies rather than executing actions.
- **Guardrails:** Rules that restrict when functions can run and what responses are allowed in regulated environments.
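Structured outputs are worth one concrete illustration. Most function-calling APIs (OpenAI, Anthropic, Gemini) accept some variant of a JSON Schema that declares a function's name, purpose, and typed parameters; the model then emits arguments matching that schema. A hedged sketch of such a declaration, using the example function from earlier:

```python
# A function declaration in the JSON Schema style most function-calling
# APIs accept in some variant; field layout differs slightly per vendor.
get_client_portfolio_schema = {
    "name": "get_client_portfolio",
    "description": "Fetch a client's holdings and valuation from core systems.",
    "parameters": {
        "type": "object",
        "properties": {
            "client_id": {"type": "string"},
            "as_of_date": {"type": "string", "format": "date"},
        },
        "required": ["client_id"],
    },
}
```

For a PM, the schema is the contract: it is what you review with engineering and compliance to decide exactly what the agent can ask for.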
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.