What Are State Machines in AI Agents? A Guide for Compliance Officers in Fintech
State machines are a way to model an AI agent as a set of defined states, with rules that control when it can move from one state to another. In AI agents, a state machine makes the agent’s behavior predictable by limiting what it can do at each step and what must happen before it changes course.
How It Works
Think of a state machine like a bank’s case-handling workflow.
A customer dispute does not jump randomly from “received” to “resolved.” It moves through known states:
- New
- Under Review
- Waiting for Customer
- Approved
- Rejected
- Closed
Each transition has conditions. For example:
- If the claim is missing documents, move to Waiting for Customer
- If the evidence is sufficient and policy rules pass, move to Approved
- If the case is outside policy, move to Rejected
An AI agent works the same way when you wrap it in a state machine. Instead of letting the model improvise every next step, you define what state it is in and what actions are allowed.
A simple mental model:
| Concept | Everyday analogy | In an AI agent |
|---|---|---|
| State | A traffic light color | The agent’s current mode |
| Transition | Light changes from red to green | The agent moves to the next step |
| Guard condition | “Only open if ID is valid” | Rules that must be true before moving |
| Action | Pressing the button / opening the gate | What the agent does in that state |
For compliance teams, this matters because an AI agent should not be treated like a free-form chatbot. It should behave more like a controlled workflow engine with an LLM inside certain steps.
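The table above can be sketched in code. This is a minimal, illustrative Python sketch, not a production implementation: the `StateMachine` class, the guard functions, and the context keys (`missing_documents`, `evidence_sufficient`, `policy_pass`) are all hypothetical names chosen to match the dispute example.

```python
class StateMachine:
    """Minimal finite state machine with guard conditions."""

    def __init__(self, initial_state, transitions):
        # transitions: {(from_state, to_state): guard_function}
        self.state = initial_state
        self.transitions = transitions

    def move(self, to_state, context):
        guard = self.transitions.get((self.state, to_state))
        if guard is None:
            raise ValueError(f"No transition {self.state} -> {to_state}")
        if not guard(context):  # the guard condition must hold before moving
            raise PermissionError(f"Guard blocked {self.state} -> {to_state}")
        self.state = to_state

# Dispute workflow from the analogy above
dispute = StateMachine("Under Review", {
    ("Under Review", "Waiting for Customer"): lambda ctx: ctx["missing_documents"],
    ("Under Review", "Approved"): lambda ctx: ctx["evidence_sufficient"] and ctx["policy_pass"],
    ("Under Review", "Rejected"): lambda ctx: not ctx["policy_pass"],
})

dispute.move("Approved", {"evidence_sufficient": True, "policy_pass": True})
print(dispute.state)  # Approved
```

The important property is that an invalid or unguarded move raises an error instead of silently happening, which is exactly the control a workflow engine gives you.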
Example:
- In Intake, the agent reads the customer request
- In Risk Check, it checks for sanctions, KYC gaps, or unusual behavior
- In Decision Draft, it prepares a recommendation
- In Human Review, it waits for approval if risk is above threshold
- In Execute, it performs only approved actions
The key point: the model can generate text, but the state machine decides what happens next.
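One way to enforce that point is a per-state action whitelist. The sketch below is a hypothetical illustration: the state and action names follow the example above, and `perform` stands in for whatever dispatch layer sits between the model's output and real systems.

```python
# Each state whitelists the actions the agent may take in it.
ALLOWED_ACTIONS = {
    "Intake": {"read_request"},
    "Risk Check": {"screen_sanctions", "check_kyc"},
    "Decision Draft": {"draft_recommendation"},
    "Human Review": {"await_approval"},
    "Execute": {"release_payment"},
}

def perform(state, action):
    """Execute an action only if the current state permits it."""
    if action not in ALLOWED_ACTIONS.get(state, set()):
        raise PermissionError(f"Action '{action}' not allowed in state '{state}'")
    return f"{action} executed in {state}"

perform("Risk Check", "check_kyc")       # allowed
# perform("Intake", "release_payment")   # would raise PermissionError
```

Even if the model proposes releasing a payment during intake, the whitelist refuses it; the model suggests, the state machine disposes.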
Why It Matters
Compliance officers in fintech should care because state machines create control around AI behavior.
- Auditability
  - Every action can be tied to a known state and transition.
  - That gives you a cleaner audit trail than an unconstrained agent that “just responded.”
- Policy enforcement
  - You can require checks before sensitive actions.
  - Example: no payment release unless sanctions screening and approval both pass.
- Reduced operational risk
  - State machines prevent agents from skipping steps.
  - That lowers the chance of unauthorized decisions, broken workflows, or inconsistent handling.
- Clear human oversight
  - You can force escalation into a review state when confidence is low or risk is high.
  - That helps align with internal controls and model governance.
Here’s the practical compliance angle: if you can map each AI action to an approved state transition, you can document it, test it, and monitor it.
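Documenting and monitoring transitions can be as simple as an append-only log. A minimal sketch, assuming hypothetical field names (`case_id`, `guards`) and an in-memory list standing in for a real audit store:

```python
import json
from datetime import datetime, timezone

audit_log = []

def log_transition(case_id, from_state, to_state, guard_results):
    """Record every transition so audit can replay the case history."""
    audit_log.append({
        "case_id": case_id,
        "from": from_state,
        "to": to_state,
        "guards": guard_results,  # e.g. {"sanctions_clear": True}
        "at": datetime.now(timezone.utc).isoformat(),
    })

log_transition("CASE-001", "Risk Check", "Decision Draft",
               {"sanctions_clear": True, "kyc_complete": True})
print(json.dumps(audit_log[-1], indent=2))
```

Because every entry names a known state and the guard results that justified the move, an auditor can check each record against the approved transition table.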
Real Example
Let’s use a banking fraud investigation workflow.
A customer reports an unauthorized card transaction. An AI agent helps triage the case, but it runs inside a state machine.
States
- Case Opened
- Identity Verified
- Transaction Analyzed
- Fraud Likely
- Fraud Unclear
- Escalated to Analyst
- Customer Notified
- Case Closed
Flow
- The case enters Case Opened.
- The agent verifies identity before doing anything else.
- It analyzes transaction patterns, device signals, and prior disputes.
- If rules indicate high fraud probability, it moves to Fraud Likely.
- If signals conflict or confidence is low, it moves to Fraud Unclear and then Escalated to Analyst.
- Only after approval does it send notifications or recommend chargeback actions.
- Finally, the case closes with logged reasoning and timestamps.
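That flow can be encoded as a transition table: each state maps to a rule that picks the next state from the case context. This is an illustrative sketch; the context keys (`identity_verified`, `fraud_score`, `analyst_approved`) and the 0.9 threshold are invented for the example.

```python
# Deterministic transition rules for the fraud triage flow.
TRANSITIONS = {
    "Case Opened": lambda ctx: (
        "Identity Verified" if ctx["identity_verified"] else "Case Opened"),
    "Identity Verified": lambda ctx: "Transaction Analyzed",
    "Transaction Analyzed": lambda ctx: (
        "Fraud Likely" if ctx["fraud_score"] >= 0.9 else "Fraud Unclear"),
    "Fraud Unclear": lambda ctx: "Escalated to Analyst",
    "Escalated to Analyst": lambda ctx: (
        "Customer Notified" if ctx["analyst_approved"] else "Escalated to Analyst"),
    "Customer Notified": lambda ctx: "Case Closed",
}

def step(state, ctx):
    return TRANSITIONS[state](ctx)

# Walk a low-confidence case through the flow
ctx = {"identity_verified": True, "fraud_score": 0.4, "analyst_approved": True}
state = "Case Opened"
path = [state]
while state != "Case Closed":
    state = step(state, ctx)
    path.append(state)
print(" -> ".join(path))
```

With a fraud score below the threshold, the case is routed through Fraud Unclear and the analyst escalation, never straight to a customer notification.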
Why this setup helps
Without a state machine, an LLM might draft a customer message before identity verification finishes, or suggest freezing an account without passing required checks.
With a state machine:
- The agent cannot notify customers until identity verification succeeds
- The agent cannot recommend account restrictions without fraud thresholds being met
- Every branch is visible for review by compliance and audit teams
That makes the system easier to validate against policy requirements like:
- step-by-step approvals
- segregation of duties
- escalation thresholds
- evidence retention
For engineers building this in production, the pattern usually looks like:
- a deterministic workflow engine for states and transitions
- an LLM used only inside specific states, for classification or drafting
- a policy engine for hard controls
- logging for every transition and decision input
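A sketch of that production pattern, under stated assumptions: `call_llm` is a hypothetical stub standing in for a real model call, and the 0.9 threshold is invented. The point is the division of labor: the LLM is invoked only inside one state and only to draft text, while the transition itself is a deterministic rule whose inputs get logged.

```python
def call_llm(prompt):
    """Hypothetical stub standing in for a real model call."""
    return "Signals consistent with account takeover."

def run_transaction_analyzed(case):
    # 1. LLM used only inside this state, and only for drafting a summary
    case["llm_summary"] = call_llm(f"Summarize risk signals: {case['signals']}")
    # 2. The transition is decided by a deterministic rule, not by the model
    next_state = "Fraud Likely" if case["fraud_score"] >= 0.9 else "Fraud Unclear"
    # 3. Every decision input is logged for audit
    case.setdefault("log", []).append({
        "state": "Transaction Analyzed",
        "next": next_state,
        "inputs": {"fraud_score": case["fraud_score"]},
    })
    return next_state

case = {"signals": ["new device", "foreign IP"], "fraud_score": 0.95}
print(run_transaction_analyzed(case))  # Fraud Likely
```

Swapping the model for a better one changes the quality of the summary, not the control flow, which is what keeps the system testable against policy.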
Related Concepts
- Workflow orchestration
  - Broader process automation that coordinates tasks across systems.
- Finite state machines
  - The formal computer science version of this idea; useful when modeling exact transitions.
- Policy engines
  - Rule systems that decide whether an action is allowed.
- Human-in-the-loop review
  - A control pattern where people approve high-risk decisions before execution.
- Agent guardrails
  - Constraints that limit what an AI agent can say or do in regulated environments.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.