What Is Tool Use in AI Agents? A Guide for Engineering Managers in Lending

By Cyprian Aarons · Updated 2026-04-21

Tool use in AI agents is the ability for an agent to call external systems, APIs, or functions to get work done instead of only generating text. In lending, that means the agent can check a borrower’s application status, pull credit policy rules, fetch document data, or create a task in your workflow system.

How It Works

Think of an AI agent as a loan operations coordinator with a desk full of phone numbers and system logins. The model does not “know” your current underwriting rules, loan status, or CRM records from memory, so it uses tools to ask the right system for the right answer.

The flow is usually:

  • The user asks something like, “Can we approve this borrower?”
  • The agent reads the request and decides what it needs to know.
  • It calls one or more tools:
    • LOS API for application status
    • Credit policy service for decision rules
    • Document extraction service for pay stubs or bank statements
    • CRM or case system for prior interactions
  • The tool returns structured data.
  • The agent uses that data to produce an answer or take the next action.
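
The flow above can be sketched in a few lines. Everything here is illustrative: the tool functions are hypothetical stand-ins for an LOS API and a credit policy service, not a real vendor integration.

```python
# Minimal sketch of the tool-use flow: route the question to tools,
# then ground the answer in what they return. All names are assumptions.

def los_get_status(app_id: str) -> dict:
    # Stand-in for a loan origination system API call.
    return {"app_id": app_id, "status": "in_review", "income_verified": False}

def policy_get_rules(product: str) -> dict:
    # Stand-in for a credit policy service lookup.
    return {"product": product, "max_dti": 0.43}

def answer_can_approve(app_id: str, product: str) -> str:
    """Decide what we need to know, call tools, compose a grounded answer."""
    status = los_get_status(app_id)    # tool call 1: live application status
    rules = policy_get_rules(product)  # tool call 2: current decision rules
    if not status["income_verified"]:
        return "Not yet: income is unverified, so approval is blocked."
    return f"Eligible pending DTI check against max {rules['max_dti']:.0%}."

print(answer_can_approve("APP-1001", "personal_loan"))
```

The point of the structure: the model-side logic only routes and summarizes, while every fact in the answer came back from a tool call.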

A useful analogy: tool use is like a loan officer working with a checklist and several internal systems. They do not guess whether income is verified; they look it up in the LOS, confirm it against docs, then decide what happens next. The AI agent works the same way, except the “look it up” step is automated.

For engineering managers, the important detail is this: tool use turns an LLM from a text generator into an orchestrator. The model handles reasoning and routing, while tools handle facts and side effects.

What makes tool use different from plain chat

| Plain LLM chat | Tool-using agent |
| --- | --- |
| Answers from training data | Fetches live data from systems |
| Can hallucinate missing facts | Grounds responses in source systems |
| Cannot update records | Can create tickets, send emails, trigger workflows |
| Best for explanation | Best for execution |

In production lending systems, this separation matters. You want the model to explain policy and summarize cases, but you want systems of record to remain the source of truth.

Why It Matters

  • Reduces manual ops load

    • Agents can handle repetitive lender workflows like status checks, document triage, and follow-up tasks.
    • That frees analysts and underwriters to focus on exceptions.
  • Improves accuracy

    • Tool calls ground responses in current LOS, CRM, and policy data.
    • That lowers the risk of stale answers about rate sheets, conditions outstanding, or approval status.
  • Makes automation auditable

    • Every tool call can be logged: what was requested, which system was queried, and what came back.
    • In lending, that audit trail matters for compliance and internal review.
  • Supports controlled rollout

    • You can start with read-only tools before allowing write actions like creating tasks or updating case notes.
    • That gives engineering teams a safe path from assistive workflows to partial automation.
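
The controlled-rollout idea above can be enforced mechanically with a tool allowlist. This is a sketch under assumed names (the tool registry and stage sets are hypothetical), not a prescribed design.

```python
# Staged-rollout guardrail: the agent may only invoke tools on an explicit
# allowlist, which starts read-only. Tool names are illustrative.

READ_ONLY_TOOLS = {"get_application", "check_policy_rules", "extract_documents"}
WRITE_TOOLS = READ_ONLY_TOOLS | {"create_task", "update_case_notes"}

class ToolNotAllowed(Exception):
    pass

def call_tool(name: str, allowed: set, registry: dict, **kwargs):
    # Every call passes through this gate, which is also a natural
    # place to log the audit trail (tool name, args, response).
    if name not in allowed:
        raise ToolNotAllowed(f"{name} is not enabled in this rollout stage")
    return registry[name](**kwargs)

registry = {
    "get_application": lambda app_id: {"app_id": app_id, "status": "in_review"},
}

# Stage 1: read-only calls succeed...
result = call_tool("get_application", READ_ONLY_TOOLS, registry, app_id="APP-7")

# ...while write attempts fail loudly until the team flips to WRITE_TOOLS.
try:
    call_tool("create_task", READ_ONLY_TOOLS, registry, queue="UW Exceptions")
except ToolNotAllowed as e:
    print(e)
```

Moving from assistive to partial automation then becomes a one-line config change (swap the allowlist), not a re-architecture.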

For managers, this is not just an AI feature. It is an integration pattern that determines whether your agent is useful in regulated operations or just another chatbot.

Real Example

A consumer lender wants to speed up pre-underwrite review for personal loan applications.

The borrower submits an application. An AI agent receives a task: “Summarize this file and flag missing items.”

The agent uses these tools:

  • get_application(app_id) from the loan origination system
  • extract_documents(app_id) from the document platform
  • check_policy_rules(product_code) from the underwriting policy service
  • create_task(queue="UW Exceptions", payload=...) in the workflow management system
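
Tools like these are typically exposed to the model as function-calling declarations. Here is one way the first and last tool might be declared; the structure mirrors the JSON-schema style many LLM APIs accept, but the exact field names vary by vendor and are an assumption here.

```python
# Declaring two of the tools above as function-calling schemas.
# The model sees these descriptions and emits structured calls against them.

TOOLS = [
    {
        "name": "get_application",
        "description": "Fetch application details from the loan origination system.",
        "parameters": {
            "type": "object",
            "properties": {"app_id": {"type": "string"}},
            "required": ["app_id"],
        },
    },
    {
        "name": "create_task",
        "description": "Create a task in the workflow management system.",
        "parameters": {
            "type": "object",
            "properties": {
                "queue": {"type": "string"},
                "payload": {"type": "object"},
            },
            "required": ["queue", "payload"],
        },
    },
]
```
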

What happens next:

  1. The agent pulls application details: income type, requested amount, DTI estimate.
  2. It checks uploaded documents and sees one bank statement is missing page 2.
  3. It queries policy rules and finds that self-employed applicants require two months of complete statements.
  4. It creates a task for underwriting with a concise summary:
    • Missing bank statement page 2
    • Self-employed income requires two complete months
    • No final decision should be made until the document issue is resolved
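
The four steps above can be sketched end to end. The tool functions here are hypothetical stand-ins for the LOS, document platform, policy service, and workflow system named earlier, with hard-coded responses so the orchestration logic is visible.

```python
# End-to-end sketch of the pre-underwrite review. All tool responses
# are hard-coded stand-ins; only the orchestration logic is the point.

def get_application(app_id):
    return {"income_type": "self_employed", "amount": 15000, "dti": 0.38}

def extract_documents(app_id):
    return {"bank_statements": [{"month": 1, "complete": True},
                                {"month": 2, "complete": False}]}  # page 2 missing

def check_policy_rules(product_code):
    return {"self_employed_statement_months": 2}

def pre_underwrite_review(app_id, product_code):
    app = get_application(app_id)              # step 1: pull details
    docs = extract_documents(app_id)           # step 2: check documents
    rules = check_policy_rules(product_code)   # step 3: query policy
    complete = [s for s in docs["bank_statements"] if s["complete"]]
    flags = []
    if (app["income_type"] == "self_employed"
            and len(complete) < rules["self_employed_statement_months"]):
        flags.append("Missing complete bank statement; self-employed income "
                     "requires two complete months")
    if flags:                                  # step 4: create UW task
        return {"queue": "UW Exceptions", "flags": flags, "hold_decision": True}
    return {"queue": None, "flags": [], "hold_decision": False}

task = pre_underwrite_review("APP-1001", "PL-01")
```
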

The underwriter does not need to open three systems and reconstruct the case manually. The agent did that orchestration work.

That is tool use in practice: not “AI making decisions on its own,” but AI coordinating systems so humans get faster, better-prepared decisions.

Related Concepts

  • Function calling

    • The mechanism many LLMs use to invoke tools with structured inputs.
  • Retrieval-Augmented Generation (RAG)

    • Pulling relevant knowledge into context before generating an answer.
    • Useful for policies and procedures; different from live system actions.
  • Workflow orchestration

    • Managing multi-step processes across services like LOS, CRM, email, and task queues.
  • Guardrails

    • Rules that limit which tools an agent can call and when.
    • Important for compliance-sensitive lending workflows.
  • Human-in-the-loop

    • Requiring human approval before high-risk actions like adverse action notices or loan decision updates.
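
The human-in-the-loop concept can be made concrete with a small dispatch wrapper: high-risk tool calls are queued for approval instead of executing. The risk set and tool names below are illustrative assumptions.

```python
# Human-in-the-loop gate: high-risk tools never execute directly;
# they land in an approval queue for a person to review first.

HIGH_RISK = {"send_adverse_action_notice", "update_loan_decision"}

pending_approvals = []

def dispatch(tool_name, tool_fn, **kwargs):
    if tool_name in HIGH_RISK:
        # Park the call for human review instead of running it.
        pending_approvals.append({"tool": tool_name, "args": kwargs})
        return {"status": "pending_human_approval"}
    return tool_fn(**kwargs)

# A decision update is intercepted, not executed:
result = dispatch("update_loan_decision", lambda **kw: kw,
                  app_id="APP-7", decision="deny")
```
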

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
