What Is Function Calling in AI Agents? A Guide for Engineering Managers in Wealth Management

By Cyprian Aarons · Updated 2026-04-21

Function calling in AI agents is the mechanism that lets a model request a specific tool or API to do work outside the model itself. In practice, it means the agent can turn a user request into a structured action like “look up account balance,” “calculate portfolio exposure,” or “create a case,” instead of only generating text.

How It Works

Think of function calling like a private banker handing a request to an operations desk.

The banker does not personally pull every report, check every system, or execute every trade instruction. They identify what needs to happen, fill out the right form, and send it to the right team. Function calling works the same way: the AI agent decides which function to invoke, passes structured inputs, waits for the result, then uses that result to continue the conversation.

A typical flow looks like this:

  • A user asks: “Show me clients with concentrated exposure in tech and recent cash withdrawals.”
  • The model interprets the intent.
  • It selects one or more predefined functions, such as:
    • get_portfolio_holdings(client_id)
    • get_cash_movements(client_id, date_range)
    • flag_concentration_risk(holdings)
  • The application executes those functions against approved systems.
  • The results are returned to the model.
  • The model summarizes the outcome in plain language for the user.
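The flow above can be sketched as a small loop: the application sends messages to the model, executes any functions the model requests, feeds the results back, and stops once the model answers in plain text. This is a minimal, self-contained sketch; the `StubModel`, `ToolCall`, and `get_portfolio_holdings` names are illustrative assumptions standing in for a real LLM API and real backend systems.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    arguments: dict

@dataclass
class Turn:
    text: str = ""
    tool_calls: list = field(default_factory=list)

class StubModel:
    """Stands in for an LLM: first requests data via a tool, then answers."""
    def __init__(self):
        self.step = 0

    def complete(self, messages, tools):
        self.step += 1
        if self.step == 1:  # model decides it needs portfolio data first
            return Turn(tool_calls=[ToolCall("get_portfolio_holdings",
                                             {"client_id": "C-123"})])
        holdings = messages[-1]["content"]  # tool result appended by the loop
        return Turn(text=f"Top holding: {holdings[0]['ticker']}")

def get_portfolio_holdings(client_id):
    # In production this would query the portfolio accounting system.
    return [{"ticker": "MSFT", "weight": 0.21}, {"ticker": "AAPL", "weight": 0.15}]

TOOLS = {"get_portfolio_holdings": get_portfolio_holdings}

def run_agent_turn(model, tools, user_message):
    messages = [{"role": "user", "content": user_message}]
    while True:
        turn = model.complete(messages, tools)
        if not turn.tool_calls:           # no more tools needed: final answer
            return turn.text
        for call in turn.tool_calls:      # execute each requested function
            result = tools[call.name](**call.arguments)
            messages.append({"role": "tool", "name": call.name, "content": result})

print(run_agent_turn(StubModel(), TOOLS, "Show me concentrated tech exposure"))
```

The key design point: the model only *names* the function and its arguments; your application code decides whether and how to execute it.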

The important detail is that the model does not invent the answer from memory when accuracy matters. It asks for data through tools you control.

For engineering managers in wealth management, this matters because most useful agent workflows are not single-turn chat. They are multi-step workflows across CRM, portfolio accounting, compliance rules, document stores, and market data services. Function calling is the bridge between natural language and those systems.

Why It Matters

  • It reduces hallucination risk

    • The agent can fetch real data instead of guessing.
    • That is critical when a client asks about holdings, fees, suitability checks, or transaction history.
  • It turns chat into workflow execution

    • An advisor-facing assistant can do more than answer questions.
    • It can open cases, draft client summaries, retrieve KYC status, or prepare meeting notes from source systems.
  • It gives engineering teams control

    • You define which tools exist and what parameters they accept.
    • That means better governance than letting a model free-form its way through internal systems.
  • It supports auditability

    • Every function call can be logged with inputs, outputs, timestamps, and user identity.
    • In regulated environments, that trail matters as much as the answer itself.
  • It improves product reliability

    • The agent becomes predictable because each action maps to an explicit backend capability.
    • That makes testing easier than trying to validate open-ended text generation alone.
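The auditability point above is straightforward to implement: wrap every tool so that inputs, outputs, timestamps, and the acting user are recorded no matter which function the model picks. This sketch uses only the standard library; the in-memory `AUDIT_LOG` and the `get_kyc_status` stub are illustrative assumptions (a real system would write to an append-only store).

```python
import functools
import json
import time

AUDIT_LOG = []  # in production: an append-only audit store, not a list

def audited(func):
    """Decorator that logs every call to a tool with inputs, output, and user."""
    @functools.wraps(func)
    def wrapper(*args, user_id=None, **kwargs):
        entry = {
            "tool": func.__name__,
            "user_id": user_id,
            "inputs": {"args": args, "kwargs": kwargs},
            "timestamp": time.time(),
        }
        result = func(*args, **kwargs)
        entry["output"] = result
        # Round-trip through JSON so only serializable data is stored.
        AUDIT_LOG.append(json.loads(json.dumps(entry, default=str)))
        return result
    return wrapper

@audited
def get_kyc_status(client_id):
    return {"client_id": client_id, "kyc": "verified"}  # illustrative stub

get_kyc_status("C-123", user_id="advisor-42")
print(AUDIT_LOG[0]["tool"], AUDIT_LOG[0]["user_id"])
```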

Real Example

A wealth manager wants an assistant that helps prepare for client review meetings.

The advisor types:

“Summarize Jane Patel’s portfolio risk and tell me if there were any large withdrawals in the last 30 days.”

The agent should not respond from general knowledge. It should call specific functions against approved internal services.

Example tool definitions:

{
  "name": "get_client_profile",
  "parameters": {
    "client_id": "string"
  }
}
{
  "name": "get_portfolio_risk_summary",
  "parameters": {
    "client_id": "string"
  }
}
{
  "name": "get_cash_withdrawals",
  "parameters": {
    "client_id": "string",
    "days_back": "integer"
  }
}
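Before any of these functions execute, the model-supplied arguments should be checked against the declared parameters. Here is a minimal validator for the simplified schema format used in the definitions above; a production system would more likely use full JSON Schema, and the helper names here are assumptions for illustration.

```python
# Simplified schemas mirroring the tool definitions in the article.
TOOL_SCHEMAS = {
    "get_client_profile": {"client_id": "string"},
    "get_portfolio_risk_summary": {"client_id": "string"},
    "get_cash_withdrawals": {"client_id": "string", "days_back": "integer"},
}
PY_TYPES = {"string": str, "integer": int}

def validate_call(name, arguments):
    """Reject unknown tools, missing or extra params, and type mismatches."""
    schema = TOOL_SCHEMAS.get(name)
    if schema is None:
        return False, f"unknown tool: {name}"
    if set(arguments) != set(schema):
        return False, f"expected params {sorted(schema)}, got {sorted(arguments)}"
    for param, type_name in schema.items():
        if not isinstance(arguments[param], PY_TYPES[type_name]):
            return False, f"{param} must be {type_name}"
    return True, "ok"

print(validate_call("get_cash_withdrawals", {"client_id": "C-123", "days_back": 30}))
print(validate_call("get_cash_withdrawals", {"client_id": "C-123", "days_back": "30"}))
```

Rejecting a malformed call before it reaches a backend is cheap insurance: the model can be re-prompted with the validation error instead of the error surfacing in a client-facing system.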

Execution flow:

  1. The agent resolves the client_id via a CRM lookup.
  2. It calls get_portfolio_risk_summary.
  3. It calls get_cash_withdrawals with days_back = 30.
  4. It receives structured results from portfolio and transaction systems.
  5. It generates a concise response:

“Jane Patel’s portfolio is moderately aggressive with 18% concentration in technology names and an estimated equity beta above benchmark. There were two withdrawals in the last 30 days: $75K on March 4 and $40K on March 19.”

That response is useful because it is grounded in live systems. If your firm adds compliance logic later, you can insert another function like check_suitability_flags(client_id) without changing how users interact with the assistant.
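One way to get that extensibility is a tool registry: each backend capability is registered once with its name and parameters, and the manifest sent to the model is generated from the registry. Adding a compliance check like check_suitability_flags then becomes one more registration. This is a sketch under that assumption; the lambdas stand in for real service calls.

```python
TOOL_REGISTRY = {}

def register(name, func, params):
    """Register one backend capability with its parameter schema."""
    TOOL_REGISTRY[name] = {"func": func, "params": params}

def tool_manifest():
    """What gets sent to the model so it knows which functions exist."""
    return [{"name": n, "parameters": t["params"]} for n, t in TOOL_REGISTRY.items()]

register("get_portfolio_risk_summary",
         lambda client_id: {"beta": 1.2},
         {"client_id": "string"})
register("get_cash_withdrawals",
         lambda client_id, days_back: [],
         {"client_id": "string", "days_back": "integer"})
# Later, compliance adds a check without touching the UI or the prompt flow:
register("check_suitability_flags",
         lambda client_id: {"flags": []},
         {"client_id": "string"})

print([t["name"] for t in tool_manifest()])
```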

This is where engineering managers should pay attention. Function calling lets you separate concerns cleanly:

Layer          | Responsibility
Model          | Understand intent and choose actions
Tool/API layer | Execute approved operations
Business rules | Enforce suitability, compliance, entitlements
Audit/logging  | Record what happened
UI             | Present results to advisors or ops staff
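The business-rules layer in that table can be as simple as a gate between the model's chosen action and the tool layer, so entitlement checks run on every call. This is an illustrative sketch; the entitlement table, user IDs, and backend function are assumptions, not a prescribed design.

```python
# Who may call which tools (business-rules layer data, illustrative).
ENTITLEMENTS = {"advisor-42": {"get_portfolio_risk_summary"}}

def backend_get_portfolio_risk_summary(client_id):
    # Tool/API layer: would query the portfolio risk service in production.
    return {"client_id": client_id, "risk": "moderate"}

def execute(user_id, tool_name, arguments):
    """Business-rules layer: enforce entitlements before any tool runs."""
    if tool_name not in ENTITLEMENTS.get(user_id, set()):
        return {"error": "not entitled"}  # denied, and worth logging
    return backend_get_portfolio_risk_summary(**arguments)

print(execute("advisor-42", "get_portfolio_risk_summary", {"client_id": "C-123"}))
print(execute("intern-7", "get_portfolio_risk_summary", {"client_id": "C-123"}))
```

Because the check lives outside both the model and the tool implementation, neither has to change when entitlement policy does.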

That separation is what makes agents viable in wealth management instead of just impressive demos.

Related Concepts

  • Tool use

    • Broader term for models interacting with external systems through APIs or utilities.
  • Structured outputs

    • Forcing responses into JSON or schema-defined formats so downstream systems can trust them.
  • Retrieval-Augmented Generation (RAG)

    • Pulling documents or facts from a knowledge base before generating an answer.
  • Workflow orchestration

    • Coordinating multi-step business processes across services and human approvals.
  • Guardrails and policy enforcement

    • Rules that restrict what an agent can do, especially around client data and regulated actions.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

