What Are State Machines in AI Agents? A Guide for Compliance Officers in Lending
State machines are a way to model an AI agent as a set of defined states, with explicit rules for how it can move from one state to another. In lending, they help ensure an AI agent only takes approved actions at each step of a workflow, such as collecting documents, checking eligibility, escalating exceptions, or issuing a decision.
How It Works
Think of a state machine like a loan application checklist with locked steps.
A borrower does not go from “application received” straight to “approved” unless the required conditions are met. The process moves through states such as:
- Received
- KYC Pending
- Income Verified
- Under Review
- Approved
- Declined
- Escalated
Each state has rules. For example:
- If identity verification fails, move to Escalated
- If income documents are missing, stay in KYC Pending
- If all checks pass and policy thresholds are met, move to Approved
That is the core idea: the agent is not free to improvise. It can only act according to the current state and the transition rules.
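The transition rules above can be sketched as a simple lookup table: for each state, only certain events lead anywhere. This is a minimal illustration in Python; the state and event names follow the article's example, but the `step` helper and event labels are my own, not from any specific library.

```python
# Allowed transitions: current state -> {event: next state}.
# Any (state, event) pair not listed here is simply not permitted.
TRANSITIONS = {
    "Received": {"kyc_started": "KYC Pending"},
    "KYC Pending": {
        "identity_failed": "Escalated",
        "income_verified": "Income Verified",
        # Missing income documents: no entry, so the application stays put.
    },
    "Income Verified": {"review_started": "Under Review"},
    "Under Review": {
        "checks_passed": "Approved",
        "hard_fail": "Declined",
        "risk_flag": "Escalated",
    },
}

def step(state: str, event: str) -> str:
    """Return the next state, or stay in place if the event is not allowed."""
    return TRANSITIONS.get(state, {}).get(event, state)
```

For example, `step("KYC Pending", "identity_failed")` yields `"Escalated"`, while an unapproved shortcut like `step("Received", "checks_passed")` leaves the state unchanged: the agent cannot improvise its way to approval.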
For compliance officers, this matters because it creates a controlled workflow. The AI agent can still use language models for tasks like summarizing documents or drafting messages, but the business logic stays deterministic. That means the system knows when it is allowed to ask for more information, when it must hand off to a human, and when it must stop.
A simple analogy: think of an airport security line.
- You cannot board until you clear ID checks
- You cannot clear ID checks until you present your document
- If something looks wrong, you do not proceed to boarding; you get pulled aside for review
That is a state machine. Each checkpoint is a state, and each transition depends on evidence or policy.
Simple View vs. Engineering View
| View | What it means |
|---|---|
| Compliance view | A controlled process with documented steps and exception handling |
| Product view | A workflow that prevents the AI from skipping required checks |
| Engineering view | A finite-state model where transitions are triggered by events and guard conditions |
Why It Matters
Compliance officers in lending should care because state machines reduce ambiguity in AI-driven workflows.
- **They make decision paths auditable.** Every transition can be logged: who moved what, when, and why.
- **They prevent unauthorized shortcuts.** An agent cannot jump from intake to approval without passing required checks.
- **They support policy enforcement.** Rules like "manual review required above threshold X" become explicit transitions.
- **They improve exception handling.** Missing documents, failed verification, and adverse findings can route into predefined escalation paths instead of ad hoc behavior.
In regulated lending environments, that structure is valuable. If you need to explain why an application was paused, declined, or escalated, the state history gives you a clean trail instead of a vague model output.
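One way to produce that clean trail is to record every transition as a structured log entry at the moment it happens. The sketch below is illustrative; the field names and `record_transition` helper are assumptions of mine, not a standard schema.

```python
import datetime

audit_log = []

def record_transition(app_id, from_state, to_state, event, actor, evidence):
    """Append a record of who moved what, when, and why."""
    audit_log.append({
        "application_id": app_id,
        "from": from_state,
        "to": to_state,
        "event": event,
        "actor": actor,        # "agent" or a human reviewer's ID
        "evidence": evidence,  # e.g. document IDs or check results
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

# Example: an identity check fails and the application is escalated.
record_transition("APP-1042", "KYC Pending", "Escalated",
                  "identity_failed", "agent", ["id_check#77"])
```

Because the log is written at the transition itself, explaining a pause or decline later is a lookup, not a reconstruction.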
Real Example
Consider an AI agent used in a consumer lending platform to pre-screen applications before underwriting.
The workflow could look like this:
1. **Application Received.** The borrower submits basic details and consent.
2. **Identity Check.** The agent verifies government ID and compares it with submitted data.
3. **Document Review.** The agent extracts income and employment data from uploaded statements or payslips.
4. **Policy Evaluation.** The system checks debt-to-income ratio, minimum income thresholds, and product eligibility rules.
5. **Decision Routing.**
   - If all criteria pass: move to Eligible for Underwriting
   - If documents are incomplete: move to Needs More Information
   - If risk flags appear: move to Manual Review
   - If hard policy rules fail: move to Declined
Here is what that might look like in simplified pseudocode:
```python
state = "Application Received"

# Each step only fires when the application is in the right state.
if state == "Application Received" and consent_valid:
    state = "Identity Check"

if state == "Identity Check" and id_verified:
    state = "Document Review"
elif state == "Identity Check" and id_failed:
    state = "Manual Review"

if state == "Document Review" and docs_complete:
    state = "Policy Evaluation"
elif state == "Document Review" and docs_missing:
    state = "Needs More Information"

if state == "Policy Evaluation" and policy_passed:
    state = "Eligible for Underwriting"
elif state == "Policy Evaluation" and hard_fail:
    state = "Declined"
elif state == "Policy Evaluation":
    # Neither a clear pass nor a hard fail: a human must look.
    state = "Manual Review"
```
What matters here is not the code itself. It is the control structure.
The AI can help interpret documents or classify risk signals, but it cannot decide outside the allowed path. That means compliance teams can define where human review is mandatory, what evidence is needed before progression, and which outcomes require adverse action notices or additional disclosures.
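Making human review mandatory at certain states can itself be encoded as a rule: the agent is simply not an actor permitted to advance past those states. A minimal sketch, assuming a convention (my own, not from the article) where each transition carries an `actor` label:

```python
# States that require a human decision before any forward transition.
HUMAN_GATED = {"Manual Review", "Escalated"}

def advance(state: str, next_state: str, actor: str) -> str:
    """Move forward only if the current state permits this actor to act."""
    if state in HUMAN_GATED and actor != "human":
        # The agent may not move an application past a human-gated state.
        return state
    return next_state
```

With this rule, `advance("Manual Review", "Approved", "agent")` leaves the application in Manual Review, while the same call with `actor="human"` proceeds, which is exactly the checkpoint a compliance team wants to be able to point to.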
This pattern also helps during audits. Instead of asking “what did the model feel like doing?”, auditors can inspect:
- Current state
- Triggering event
- Transition rule
- Evidence used
- Human override, if any
That is much easier to defend than a free-form agent making decisions without process boundaries.
Related Concepts
- **Workflow orchestration.** Coordinates tasks across systems and services; often uses states under the hood.
- **Guardrails.** Policy constraints that limit what an AI agent can say or do at each step.
- **Human-in-the-loop review.** Requires human approval at specific states before proceeding.
- **Decision trees.** Similar branching logic, but less focused on persistent process states over time.
- **Audit logging.** Records every transition and action for traceability and regulatory review.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.