What Is Function Calling in AI Agents? A Guide for Compliance Officers in Retail Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: function-calling, compliance-officers-in-retail-banking, function-calling-retail-banking

Function calling in AI agents is the mechanism that lets the model request a specific external action, such as looking up a customer record, checking policy rules, or creating a case. It turns a language model from a text generator into a system that can trigger approved functions with structured inputs and receive structured outputs.

In retail banking, that means the agent does not just say “I think this looks suspicious.” It can call a sanctioned function like check_transaction_limits, verify_customer_status, or open_compliance_case, then use the result to continue the workflow.

How It Works

Think of function calling like a bank teller using a checklist and internal systems instead of guessing.

A customer asks for something complex, such as: “Can I increase my daily transfer limit?” The AI agent reads the request, decides it needs more than language generation, and selects an approved function. That function might send back fields like current limit, account age, KYC status, or whether manual review is required.

The important part is this: the model does not directly invent the answer. It asks for data or action through a controlled interface.

A typical flow looks like this:

  • Customer message comes in
  • The AI agent interprets intent
  • The agent chooses from pre-approved functions
  • It passes structured arguments, such as account ID or request type
  • The backend system executes the function
  • The result returns to the agent
  • The agent responds using that result

For compliance teams, this matters because the agent is constrained by what it is allowed to call. You are not letting free-form text drive business actions. You are exposing named functions with defined inputs, outputs, and guardrails.
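The flow above can be sketched as a minimal registry-and-dispatch step. All function names, arguments, and return values here are made up for illustration; they are not a real banking API:

```python
# Minimal sketch of a pre-approved function registry, with stub backends.
# Only functions on this list can ever be executed, no matter what the model asks for.
APPROVED_FUNCTIONS = {
    "check_transfer_limit": lambda account_id: {"daily_limit": 5000, "currency": "GBP"},
    "verify_customer_status": lambda customer_id: {"kyc_status": "verified"},
}

def dispatch(function_name: str, arguments: dict) -> dict:
    """Execute a model-requested call only if it appears on the approved list."""
    if function_name not in APPROVED_FUNCTIONS:
        raise PermissionError(f"Function not approved: {function_name}")
    return APPROVED_FUNCTIONS[function_name](**arguments)

print(dispatch("check_transfer_limit", {"account_id": "ACC-001"}))
```

The key design choice is that the allow-list lives in the backend, not in the prompt: a model cannot talk its way into an unapproved action.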

Here is the everyday analogy: imagine a branch manager who cannot just walk into every vault and system. They have access cards for specific doors. Function calling is the digital version of those access cards.

A good implementation also logs every step:

  • Which function was called
  • What input was passed
  • What system responded
  • Whether human review was required
  • What final response was shown to the customer

That audit trail is where compliance gets real value.
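A sketch of that audit trail, assuming a simple in-memory log for illustration (a production system would write to an append-only or immutable store):

```python
import json
from datetime import datetime, timezone

audit_log = []

def log_tool_call(function, arguments, result, human_review_required, final_response):
    """Record every step of a tool call for later compliance review."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "function": function,
        "arguments": arguments,
        "result": result,
        "human_review_required": human_review_required,
        "final_response": final_response,
    })

log_tool_call(
    "check_transfer_limit", {"account_id": "ACC-001"},
    {"daily_limit": 5000}, False,
    "Your daily transfer limit is £5,000.",
)
print(json.dumps(audit_log[-1], indent=2))
```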

Why It Matters

Compliance officers in retail banking should care because function calling changes how AI interacts with regulated processes.

  • It reduces hallucination risk

    The model is less likely to make up policy details when it must query approved systems instead of guessing.

  • It creates an audit trail

    Every tool call can be logged with timestamps, inputs, outputs, and decision points. That helps during reviews, complaints handling, and regulator inquiries.

  • It supports policy enforcement

    You can prevent the agent from taking actions outside approved thresholds. For example, it can check whether a transaction requires escalation before responding.

  • It improves segregation of duties

    The model can prepare or recommend actions without being able to execute them directly. Human approval can remain in place where needed.

For banks, this is especially useful in areas like fraud triage, KYC refreshes, payment disputes, sanctions screening workflows, and complaint routing. In all of those cases, the AI should assist decision-making without becoming an uncontrolled decision-maker.
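The escalation check mentioned above can be as simple as a single gate the agent must consult before responding. The £5,000 limit and the new-payee rule here are invented examples:

```python
def requires_escalation(amount: float, daily_limit: float, new_beneficiary: bool) -> bool:
    """Illustrative policy gate: escalate over-limit or new-payee transfers."""
    return amount > daily_limit or new_beneficiary

# The agent consults the gate before responding, instead of deciding on its own.
print(requires_escalation(12000, 5000, new_beneficiary=True))
```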

Real Example

Let’s say a retail banking customer contacts support through chat:

“I need to send £12,000 to my new beneficiary today.”

The AI agent should not answer from memory. It should call functions in sequence:

  1. get_customer_profile(customer_id)
  2. check_transfer_limit(account_id)
  3. check_new_payee_age(beneficiary_id)
  4. run_fraud_risk_assessment(transaction_details)

The backend might return:

  • Customer has been onboarded for 3 months
  • Daily transfer limit is £5,000
  • New beneficiary was added 2 hours ago
  • Fraud risk score is high

At that point, the AI agent can produce a compliant response:

“Your request exceeds your current transfer limit and meets criteria for additional review because the beneficiary was recently added. I’ve created a case for manual assessment.”

That response is better than a generic refusal because it is grounded in system data and follows policy logic. More importantly for compliance, the agent did not invent a rule or bypass controls.
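The four-call sequence above can be sketched as straight-line orchestration code. The function names mirror the hypothetical ones in the example, and the backends are stubs returning the same values as the worked scenario:

```python
# Stub backends standing in for real banking systems; values match the example above.
def get_customer_profile(customer_id):
    return {"onboarded_months": 3}

def check_transfer_limit(account_id):
    return {"daily_limit": 5000}

def check_new_payee_age(beneficiary_id):
    return {"age_hours": 2}

def run_fraud_risk_assessment(details):
    high = details["amount"] > 10000 or details["beneficiary_age_hours"] < 24
    return {"risk_score": 87 if high else 12,
            "decision": "manual_review_required" if high else "clear"}

def handle_transfer_request(customer_id, account_id, beneficiary_id, amount):
    """Call each approved function in turn, then apply the policy outcome."""
    profile = get_customer_profile(customer_id)
    limit = check_transfer_limit(account_id)
    payee = check_new_payee_age(beneficiary_id)
    risk = run_fraud_risk_assessment({
        "amount": amount,
        "beneficiary_age_hours": payee["age_hours"],
    })
    if amount > limit["daily_limit"] or risk["decision"] == "manual_review_required":
        return "escalate"
    return "proceed"

print(handle_transfer_request("12345", "ACC-001", "BEN-9", 12000))  # escalate
```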

A simple implementation pattern looks like this:

{
  "function": "run_fraud_risk_assessment",
  "arguments": {
    "customer_id": "12345",
    "amount": 12000,
    "currency": "GBP",
    "beneficiary_age_hours": 2,
    "channel": "chat"
  }
}

And the returned result might be:

{
  "risk_score": 87,
  "decision": "manual_review_required",
  "reason_codes": ["new_beneficiary", "high_value_transfer", "recent_login_change"]
}

The compliance value here is straightforward:

  • The input is explicit
  • The decision logic is traceable
  • The output can be reviewed later
  • A human can override if needed
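Consuming that result on the agent side might look like the following sketch. The field names match the JSON above; the case-creation step is only indicated in a comment because the helper is hypothetical:

```python
def route_decision(result: dict) -> str:
    """Route based on the structured risk result; a human can override the case later."""
    if result["decision"] == "manual_review_required":
        # In a real system, something like open_compliance_case(...) would run here.
        return "Case opened for manual review: " + ", ".join(result["reason_codes"])
    return "Transfer approved automatically."

result = {
    "risk_score": 87,
    "decision": "manual_review_required",
    "reason_codes": ["new_beneficiary", "high_value_transfer", "recent_login_change"],
}
print(route_decision(result))
```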

Related Concepts

Function calling sits next to several other ideas you’ll hear in AI governance discussions:

  • Tool use

    Broader term for models interacting with external systems such as databases, APIs, calculators, or case management platforms.

  • Structured outputs

    Responses formatted as JSON or schema-based data so downstream systems can process them reliably.

  • Agent orchestration

    The control layer that decides which tool gets called first, what happens next, and when escalation occurs.

  • Human-in-the-loop

    A control pattern where people approve sensitive actions before anything customer-facing happens.

  • Policy engines

    Systems that evaluate rules like thresholds, jurisdiction constraints, product eligibility, and escalation requirements before execution.
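A toy policy engine in the sense described above might evaluate named rules against a request before anything executes. The two rules here are invented for illustration:

```python
# Each rule is a (name, predicate) pair evaluated against a request dict.
RULES = [
    ("over_daily_limit", lambda r: r["amount"] > r["daily_limit"]),
    ("new_beneficiary", lambda r: r["beneficiary_age_hours"] < 24),
]

def evaluate(request: dict) -> list:
    """Return the names of all rules the request triggers; empty list means clear."""
    return [name for name, predicate in RULES if predicate(request)]

triggered = evaluate({"amount": 12000, "daily_limit": 5000, "beneficiary_age_hours": 2})
print(triggered)  # both rules fire for the example transfer
```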

If you are assessing an AI agent for retail banking use cases, ask one question first: what functions can it call? That answer tells you far more about its real risk than any marketing description ever will.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

