What are state machines in AI agents? A guide for product managers in wealth management
State machines in AI agents are a way to define the agent’s current condition, the allowed next steps, and what event moves it from one condition to another. In practice, a state machine keeps an AI agent from acting randomly by making it follow explicit states like waiting_for_user, gathering_documents, checking_risk, or ready_to_escalate.
How It Works
Think of a state machine like a wealth management onboarding checklist with gates.
A client does not move from “new lead” to “fully onboarded” in one jump. They go through stages:
- KYC requested
- Documents received
- Compliance review
- Account approved
- Portfolio setup
- Handoff to advisor
Each stage is a state. Each trigger, like “client uploaded passport” or “sanctions check failed,” is an event. The state machine defines which transitions are allowed and what happens next.
For AI agents, this matters because the model itself is not enough. A language model can generate text, but it does not naturally know whether it should ask for more information, call a compliance API, or stop and escalate. The state machine provides that control layer.
A simple example:
start -> collect_client_details -> validate_identity -> assess_suitability -> recommend_next_step -> end
If identity validation fails, the agent does not continue to suitability assessment. It moves to:
validate_identity -> needs_manual_review
That is the core value: predictable behavior.
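The flow above can be sketched as a minimal finite state machine in Python. This is an illustrative sketch, not a production implementation; the event names (`begin`, `identity_check_failed`, and so on) are assumptions chosen to match the states in the example.

```python
# Minimal finite state machine for the onboarding flow above.
# Allowed transitions: (current_state, event) -> next_state.
TRANSITIONS = {
    ("start", "begin"): "collect_client_details",
    ("collect_client_details", "details_received"): "validate_identity",
    ("validate_identity", "identity_verified"): "assess_suitability",
    ("validate_identity", "identity_check_failed"): "needs_manual_review",
    ("assess_suitability", "suitability_assessed"): "recommend_next_step",
    ("recommend_next_step", "recommendation_delivered"): "end",
}

class OnboardingAgent:
    def __init__(self):
        self.state = "start"

    def handle(self, event: str) -> str:
        """Apply an event; refuse any transition the table does not allow."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"Event '{event}' not allowed in state '{self.state}'")
        self.state = TRANSITIONS[key]
        return self.state

agent = OnboardingAgent()
agent.handle("begin")
agent.handle("details_received")
print(agent.handle("identity_check_failed"))  # -> needs_manual_review
```

Note that the control logic lives in a plain lookup table, not in the model: if identity validation fails, there is simply no entry that lets the agent reach `assess_suitability`.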
For product managers, the easiest analogy is a bank loan application flow or a client service ticket system. You would never let a case skip compliance just because someone typed a convincing message. A state machine prevents that kind of shortcut.
There are two common ways teams implement this:
- Finite State Machine (FSM): a small set of fixed states and transitions.
- Stateful orchestration: more flexible workflows where the agent can loop, branch, retry, or escalate based on conditions.
In wealth management, FSMs work well when the process is regulated and repeatable:
- onboarding
- suitability checks
- document collection
- complaint triage
- advisor handoff
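The difference between the two approaches can be sketched with a document-collection step that loops and escalates. Unlike a pure FSM, stateful orchestration carries extra context, here a retry counter; the `MAX_UPLOAD_ATTEMPTS` limit and the `escalate_to_advisor` state are hypothetical names for illustration.

```python
# Sketch of stateful orchestration: the workflow carries context
# (a retry counter) and can loop or escalate based on a condition.
MAX_UPLOAD_ATTEMPTS = 3  # assumed policy limit, not from any real system

def handle_document_event(state: str, event: str, attempts: int):
    """Return (next_state, attempts) for the document-collection step."""
    if state == "collect_documents":
        if event == "documents_valid":
            return "run_checks", attempts
        if event == "documents_invalid":
            attempts += 1
            if attempts >= MAX_UPLOAD_ATTEMPTS:
                return "escalate_to_advisor", attempts  # loop exhausted
            return "collect_documents", attempts  # loop: ask again
    raise ValueError(f"Event '{event}' not allowed in state '{state}'")

state, attempts = "collect_documents", 0
state, attempts = handle_document_event(state, "documents_invalid", attempts)
state, attempts = handle_document_event(state, "documents_invalid", attempts)
state, attempts = handle_document_event(state, "documents_invalid", attempts)
print(state)  # -> escalate_to_advisor
```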
Why It Matters
Product managers in wealth management should care because state machines reduce operational risk and improve control.
- They make agent behavior auditable
  - You can explain why the agent asked for a document, escalated a case, or stopped processing.
  - That matters when compliance teams ask for traceability.
- They prevent bad AI behavior
  - The agent cannot jump ahead or take actions outside its current stage.
  - This lowers the chance of incorrect recommendations or unauthorized actions.
- They improve user experience
  - Clients get consistent next steps instead of inconsistent chatbot answers.
  - That reduces drop-off in onboarding and service journeys.
- They make edge cases manageable
  - Failed identity checks, missing documents, or unclear risk profiles can be handled as explicit states.
  - Product teams can design recovery paths instead of relying on prompt luck.
Here is the practical product view: if your AI agent touches money movement, advice generation, account servicing, or regulated disclosures, you want deterministic flow control around it. State machines give you that control without removing the flexibility of an LLM where it actually helps.
Real Example
Let’s say you are building an AI assistant for high-net-worth client onboarding at a private bank.
The goal is to help relationship managers collect data and prepare an account opening packet. The agent should guide the process but never bypass compliance.
A possible state machine looks like this:
| State | What the agent does | Exit condition |
|---|---|---|
| init | Greets client and explains required steps | Client agrees to proceed |
| collect_personal_info | Requests name, address, tax residency | Required fields captured |
| collect_documents | Asks for ID proof and source-of-funds docs | Files uploaded |
| run_checks | Calls KYC/AML/sanctions services | Checks complete |
| review_outcome | Evaluates pass/fail/manual review | Decision returned |
| manual_review | Routes case to compliance officer | Officer resolves issue |
| ready_for_account_opening | Prepares final packet for ops team | Packet submitted |
Now imagine the sanctions screen returns a match.
Without a state machine, an LLM-based assistant might keep chatting politely and suggest next steps that violate policy. With a state machine, the agent moves into manual_review and stops any further automated progression until compliance clears it.
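The blocking behavior described above can be sketched in Python. The transition table follows the onboarding table earlier in this section; the event names and the `actor` check are assumptions made for illustration, not a prescribed design.

```python
# Sketch of the onboarding state machine from the table above.
# A sanctions match forces manual_review; automated progression then halts.
TRANSITIONS = {
    ("init", "client_agrees"): "collect_personal_info",
    ("collect_personal_info", "fields_captured"): "collect_documents",
    ("collect_documents", "files_uploaded"): "run_checks",
    ("run_checks", "checks_complete"): "review_outcome",
    ("review_outcome", "pass"): "ready_for_account_opening",
    ("review_outcome", "sanctions_match"): "manual_review",
    ("manual_review", "officer_cleared"): "ready_for_account_opening",
}

HUMAN_ONLY_STATES = {"manual_review"}  # no automated actions allowed here

def next_state(state: str, event: str, actor: str = "agent") -> str:
    """Advance the machine; block automation in human-only states."""
    if state in HUMAN_ONLY_STATES and actor != "compliance_officer":
        raise PermissionError(f"Automation blocked: '{state}' requires a human")
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"Event '{event}' not allowed in state '{state}'")

state = next_state("review_outcome", "sanctions_match")  # -> manual_review
# The agent cannot move further on its own: calling
# next_state(state, "officer_cleared") now raises PermissionError.
state = next_state(state, "officer_cleared", actor="compliance_officer")
print(state)  # -> ready_for_account_opening
```

The key design choice is that the human-in-the-loop gate is enforced by the transition function itself, so no amount of persuasive conversation can move the case forward.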
That gives you three things:
- controlled branching
- clear handoffs between automation and humans
- fewer regulatory surprises
This pattern also works in insurance claims:
- claim filed
- documents collected
- fraud checks run
- adjuster review
- payout approved or denied
Same structure. Different domain.
Related Concepts
- Finite State Machines
  - The simplest version of state-based control logic.
  - Good for structured workflows with limited branching.
- Workflow Orchestration
  - Manages multi-step processes across systems and services.
  - Useful when your agent needs API calls, retries, approvals, and timeouts.
- Human-in-the-loop Design
  - Inserts human review at specific states.
  - Important for advice, exceptions, and compliance escalation.
- Guardrails
  - Rules that constrain what the model can say or do.
  - Often combined with state machines to enforce policy boundaries.
- Tool Calling / Function Calling
  - Lets agents invoke external systems like CRM, KYC vendors, or portfolio tools.
  - State machines decide when those tools are allowed to run.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit