What are state machines in AI agents? A guide for compliance officers in banking
State machines are a way to model an AI agent as a set of defined states, with rules that control when it can move from one state to another. In banking, they help ensure an AI agent only takes approved actions, in the right order, with clear checkpoints for compliance.
How It Works
Think of a state machine like a bank’s loan approval workflow or an ATM transaction flow.
The system is always in one state:
- Idle
- Collecting Documents
- Waiting for Review
- Approved
- Rejected
- Escalated to Human
It moves between states only when a specific condition is met. For example:
- If the customer uploads all required documents, move from Collecting Documents to Waiting for Review
- If AML screening flags a match, move to Escalated to Human
- If the reviewer approves, move to Approved
That is the core idea: the agent cannot “freestyle.” It can only do what the current state allows.
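This idea can be sketched in a few lines of Python. The state and event names mirror the example above; the dictionary-based structure is an illustrative assumption, not a specific library's API:

```python
# Minimal state machine: the agent may only follow approved transitions.
# Any (state, event) pair not listed here is simply not possible.
ALLOWED_TRANSITIONS = {
    ("Collecting Documents", "all_documents_received"): "Waiting for Review",
    ("Collecting Documents", "aml_match_flagged"): "Escalated to Human",
    ("Waiting for Review", "reviewer_approved"): "Approved",
    ("Waiting for Review", "reviewer_rejected"): "Rejected",
    ("Waiting for Review", "aml_match_flagged"): "Escalated to Human",
}

def next_state(current: str, event: str) -> str:
    """Return the next state, or raise if the transition is not approved."""
    try:
        return ALLOWED_TRANSITIONS[(current, event)]
    except KeyError:
        raise ValueError(f"Transition not allowed: {current!r} on {event!r}")

# The agent cannot "freestyle": only listed moves succeed.
print(next_state("Collecting Documents", "all_documents_received"))  # Waiting for Review
```

Because disallowed moves raise an error instead of silently proceeding, any attempt by the agent to skip a checkpoint becomes an explicit, loggable failure.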
A useful analogy is airport security. You do not go from check-in straight to boarding without passing through document verification, screening, and gate control. Each checkpoint has rules. A state machine does the same thing for an AI agent.
For compliance teams, this matters because it creates:
- predictable behavior
- audit-friendly workflows
- controlled handoffs between automation and humans
In practice, a state machine is often implemented as:
- a list of states
- a list of allowed transitions
- conditions for each transition
- actions that run when entering or leaving a state
Here is a simple version:
```
Idle -> Collecting KYC Data -> Screening -> Human Review -> Approved
                                                         \-> Rejected
```
If the agent receives incomplete data, it stays in Collecting KYC Data.
If sanctions screening returns a hit, it moves to Human Review.
If everything passes, it can proceed to Approved.
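These three rules can be written as guard conditions on transitions. A minimal sketch, assuming a simple `application` dict whose field names are illustrative:

```python
# Sketch of the KYC flow above: each state checks its guard conditions
# and either advances, stays put, or escalates.
def step(state: str, application: dict) -> str:
    if state == "Collecting KYC Data":
        # Incomplete data: remain in the same state rather than proceed.
        if not application.get("documents_complete"):
            return "Collecting KYC Data"
        return "Screening"
    if state == "Screening":
        # A sanctions hit always routes to a human, never to auto-approval.
        if application.get("sanctions_hit"):
            return "Human Review"
        return "Approved"
    # Terminal states (Approved, Rejected, Human Review) do not advance here.
    return state

print(step("Collecting KYC Data", {"documents_complete": False}))  # Collecting KYC Data
print(step("Screening", {"sanctions_hit": True}))                  # Human Review
```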
That structure prevents common failure modes in AI agents:
- skipping required checks
- repeating actions indefinitely
- making unauthorized decisions
- mixing up automated steps and manual approvals
Why It Matters
Compliance officers should care because state machines give you control over how an AI agent behaves.
- They enforce process discipline: the agent follows an approved sequence instead of choosing its own path.
- They support auditability: every transition can be logged, including who triggered it, why it happened, and what evidence was used.
- They reduce operational risk: the agent cannot bypass mandatory controls like KYC checks, sanctions screening, or human review thresholds.
- They make policy easier to encode: rules like “escalate if confidence is below 85%” or “pause if adverse media is detected” map cleanly into transitions.
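Those policy rules translate almost one-to-one into transition guards. A sketch, where the 85% threshold comes from the example above and the field names are illustrative assumptions:

```python
# Policy rules expressed as explicit, auditable routing logic.
CONFIDENCE_THRESHOLD = 0.85  # "escalate if confidence is below 85%"

def route(case: dict) -> str:
    """Map a case to its next state using explicit transition rules."""
    if case.get("adverse_media"):
        return "Paused"              # "pause if adverse media is detected"
    if case.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        return "Escalated to Human"  # low-confidence cases never auto-proceed
    return "Automated Processing"

print(route({"confidence": 0.72}))  # Escalated to Human
```

Because the threshold is a named constant rather than buried in a prompt, changing policy means changing one reviewable line.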
For banks, this is especially useful when AI agents touch regulated workflows such as:
- onboarding
- fraud triage
- claims handling
- complaints management
- lending decisions
A state machine gives you a defensible operating model. If regulators ask how the system prevented an unsafe action, you can point to the explicit transition rules instead of vague model behavior.
Real Example
Consider an AI agent helping with retail customer onboarding for a bank.
The goal is to collect documents, run checks, and either approve the account or escalate it.
States
| State | Purpose |
|---|---|
| Start | Customer begins onboarding |
| Collecting Info | Agent requests ID and address proof |
| KYC Check | Agent validates identity data |
| AML Screening | Agent checks sanctions and watchlists |
| Risk Scoring | Agent assigns onboarding risk |
| Human Review | Compliance analyst reviews exceptions |
| Approved | Account can be opened |
| Rejected | Application denied |
Transition rules
| From | To | Condition |
|---|---|---|
| Start | Collecting Info | Customer starts application |
| Collecting Info | KYC Check | Required documents received |
| KYC Check | AML Screening | Identity verified |
| AML Screening | Risk Scoring | No sanctions hit found |
| Risk Scoring | Approved | Risk score below threshold |
| Risk Scoring | Human Review | Risk score above threshold or missing data |
| Any active state | Human Review | Potential match on watchlist |
| Human Review | Approved / Rejected | Analyst decision recorded |
What this achieves
If the customer uploads only one document, the agent stays in Collecting Info.
If AML screening finds a possible match, the agent stops and escalates.
If all checks pass and risk is acceptable, the account proceeds automatically.
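The transition table above can be encoded directly as data. Event names here are illustrative assumptions; the states and rules mirror the tables:

```python
# Onboarding transition table encoded as data, mirroring the tables above.
TRANSITIONS = {
    ("Start", "application_started"): "Collecting Info",
    ("Collecting Info", "documents_received"): "KYC Check",
    ("KYC Check", "identity_verified"): "AML Screening",
    ("AML Screening", "no_sanctions_hit"): "Risk Scoring",
    ("Risk Scoring", "low_risk"): "Approved",
    ("Risk Scoring", "high_risk"): "Human Review",
    ("Human Review", "analyst_approved"): "Approved",
    ("Human Review", "analyst_rejected"): "Rejected",
}
ACTIVE_STATES = {"Start", "Collecting Info", "KYC Check", "AML Screening", "Risk Scoring"}

def transition(state: str, event: str) -> str:
    # Global rule: a potential watchlist match escalates from any active state.
    if event == "watchlist_match" and state in ACTIVE_STATES:
        return "Human Review"
    # Otherwise the move must appear in the approved table; unknown events
    # keep the agent where it is (e.g. waiting on a missing document).
    return TRANSITIONS.get((state, event), state)

# Happy path: every check passes and risk is acceptable.
state = "Start"
for event in ["application_started", "documents_received",
              "identity_verified", "no_sanctions_hit", "low_risk"]:
    state = transition(state, event)
print(state)  # Approved
```

Keeping the table as plain data means the same artifact can be reviewed by compliance, rendered into documentation, and executed by the agent.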
This is better than letting an LLM “decide” end-to-end because:
- each step is explicit
- exceptions are contained
- approvals are traceable
- human intervention happens at defined points
For compliance teams, that means less ambiguity during model governance reviews and easier evidence collection for audits.
Related Concepts
These topics sit close to state machines in AI agent design:
- Workflow orchestration: coordinates multiple steps across systems and teams. State machines are often the control layer inside a workflow.
- Finite State Machines (FSMs): the classic computer science version of state machines with a fixed set of states and transitions.
- Guardrails: policy checks that block unsafe actions. In many designs, guardrails trigger transitions into safe states like escalation or rejection.
- Human-in-the-loop review: a required manual checkpoint where an employee approves exceptions or high-risk cases.
- Agent observability: logging traces, decisions, inputs, outputs, and transitions so compliance and engineering can reconstruct what happened later.
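The observability point can be sketched as a transition-level audit log. The record fields and the example identifier are illustrative assumptions:

```python
import datetime

# Sketch of transition-level audit logging: one record per state change,
# capturing who triggered it, why, and what evidence was used.
AUDIT_LOG = []

def log_transition(from_state, to_state, event, actor, evidence):
    """Append an audit record for a single state transition."""
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "from": from_state,
        "to": to_state,
        "event": event,
        "actor": actor,        # e.g. "agent" or an analyst ID
        "evidence": evidence,  # e.g. a screening-result reference (illustrative)
    })

log_transition("AML Screening", "Human Review", "watchlist_match",
               actor="agent", evidence="screening-result-123")
print(len(AUDIT_LOG))  # 1
```

In production this would write to an append-only store rather than an in-memory list, but the shape of the record is what matters for reconstructing a case during an audit.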
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit