What is ReAct in AI Agents? A Guide for Compliance Officers in Wealth Management
ReAct is an AI agent pattern that combines Reasoning and Acting in a loop: the model thinks about a task, takes a step, observes the result, and then decides the next step. ReAct lets an AI agent solve multi-step problems by alternating between internal reasoning and external actions like searching, calling tools, or querying systems.
How It Works
Think of ReAct like a compliance officer reviewing a complex case file.
You do not make a final decision from one document. You:
- read the latest disclosure,
- check the client profile,
- compare it against policy,
- ask for missing evidence,
- then decide what to do next.
That is ReAct in practice.
An AI agent using ReAct follows the same pattern:
- Reason: “What am I trying to verify?”
- Act: “I should query the CRM, check transaction history, or search policy rules.”
- Observe: “The client is high net worth, resident in a restricted jurisdiction, and the trade was flagged.”
- Reason again: “I need source-of-funds evidence before this can proceed.”
This matters because the agent does not just generate a one-shot answer. It behaves more like an analyst working through a checklist.
A simple loop looks like this:
Goal -> Reason -> Action -> Observation -> Reason -> Action -> Final response
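The loop above can be sketched in a few lines of Python. This is a minimal illustration, not a real framework: the `reason` function stands in for an LLM call, and the tool registry stands in for real systems of record.

```python
# Minimal ReAct loop sketch. `reason` and TOOLS are placeholders for an
# LLM call and real systems of record (CRM, screening engines, etc.).

def reason(goal, observations):
    """Stand-in for the LLM deciding the next step from what it has seen."""
    if not observations:
        return ("act", "check_crm")  # no evidence yet: gather it first
    return ("finish", "escalate: volume exceeds expected activity")

TOOLS = {
    "check_crm": lambda: {"crm": {"risk": "medium", "expected_volume": 100}},
}

def react(goal, max_steps=5):
    observations = {}
    trace = []  # every reason/act step is logged, so the run is auditable
    for _ in range(max_steps):
        kind, detail = reason(goal, observations)
        trace.append((kind, detail))
        if kind == "finish":
            return detail, trace
        observations.update(TOOLS[detail]())  # act, then observe the result
    return "max steps reached", trace

answer, trace = react("triage alert on account 123")
```

The `trace` list is what makes this pattern attractive in regulated settings: it is a step-by-step record of what the agent tried to determine and what it saw.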
For compliance teams, the key point is control. ReAct makes the model’s work more inspectable because each step can be logged:
- what it was trying to determine,
- which system it queried,
- what evidence it saw,
- why it moved to the next step.
That gives you something closer to an auditable workflow than a black-box chat response.
Why It Matters
Compliance officers in wealth management should care because ReAct changes how AI behaves in regulated workflows.
- Better traceability
  - Each tool call and intermediate step can be logged.
  - That helps with audit trails, review, and post-incident analysis.
- Less hallucination risk
  - The agent is pushed to verify against systems of record instead of guessing.
  - That matters when decisions depend on KYC data, sanctions status, or suitability constraints.
- Fits policy-driven workflows
  - ReAct works well when tasks need sequential checks.
  - Example: confirm identity, check PEP/sanctions screening, review transaction rationale, then escalate if needed.
- Easier human oversight
  - Compliance teams can insert approval gates between steps.
  - You can require escalation before an action like account restriction or case closure.
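An approval gate can be as simple as refusing to execute certain actions without a named approver. A sketch, assuming an illustrative set of high-risk action names (these are not from any real policy):

```python
# Sketch of an approval gate: high-risk actions pause for human sign-off.
# The action names and risk tiers here are illustrative assumptions.

HIGH_RISK_ACTIONS = {"restrict_account", "close_case", "file_report"}

def execute(action, approved_by=None):
    """Run an action, or hold it pending approval if it is high risk."""
    if action in HIGH_RISK_ACTIONS and approved_by is None:
        return {"status": "pending_approval", "action": action}
    return {"status": "executed", "action": action, "approved_by": approved_by}

# A low-risk lookup runs directly; a restriction waits for an officer.
execute("query_crm")                                   # executed
execute("restrict_account")                            # pending_approval
execute("restrict_account", approved_by="officer_jane")  # executed
```

The point of the design is that the agent never holds the authority to take the high-risk action itself; it can only request it.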
The important nuance: ReAct is not compliance by itself. It is an execution pattern. If your policies are weak, the agent will still follow weak policies very efficiently.
Real Example
A wealth management firm wants an AI assistant to help triage suspicious activity alerts on discretionary accounts.
Scenario
A client places multiple large trades in illiquid securities shortly after a transfer from an offshore entity. The alert comes into the compliance queue.
How a ReAct agent handles it
- Reason
  - “I need to determine whether this activity is consistent with the client profile and AML policy.”
- Act
  - Query CRM for client risk rating, occupation, and expected activity.
  - Pull recent trade history.
  - Check onboarding documents for source-of-funds declarations.
  - Search sanctions/PEP screening results.
  - Review policy rules for offshore funding and concentrated trading patterns.
- Observe
  - Client is classified as medium risk.
  - No sanctions hit.
  - Source-of-funds documentation exists but is stale.
  - Trading volume is 4x above expected monthly activity.
  - Jurisdiction is not prohibited, but enhanced due diligence is required under policy.
- Reason
  - “This does not prove wrongdoing, but it meets escalation criteria. I should prepare a case summary and request updated source-of-funds evidence.”
- Act
  - Draft a compliance note.
  - Attach cited evidence from internal systems.
  - Route the case to a human reviewer.
  - Do not recommend account closure or filing until reviewed by an authorized officer.
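The second "Reason" step above amounts to checking observations against policy thresholds. A sketch of that check, with illustrative thresholds and field names rather than a real AML rule set:

```python
# Sketch: applying policy thresholds to the observations in the example.
# Thresholds and field names are illustrative, not a real AML rule set.

def escalation_reasons(obs):
    """Return the list of policy criteria the observations trip."""
    reasons = []
    if obs["volume_ratio"] > 3:                # trading far above expected
        reasons.append("volume exceeds 3x expected activity")
    if obs["source_of_funds_age_days"] > 365:  # stale documentation
        reasons.append("source-of-funds evidence is stale")
    if obs["edd_required"]:
        reasons.append("jurisdiction requires enhanced due diligence")
    return reasons

observations = {
    "volume_ratio": 4.0,
    "source_of_funds_age_days": 500,
    "edd_required": True,
    "sanctions_hit": False,
}
reasons = escalation_reasons(observations)
# A non-empty list means: draft the case note and route to a human reviewer.
```

Keeping the criteria as explicit code (or explicit policy rules the agent reads) is what makes the escalation decision reviewable after the fact.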
Why this is useful
The agent did not try to make a final regulatory judgment on its own. It:
- gathered evidence,
- applied predefined checks,
- escalated when thresholds were met.
That is exactly how you want AI to behave in regulated operations: assistive, documented, and bounded by policy.
Related Concepts
Here are the adjacent ideas worth knowing:
- Tool use / function calling
  - The agent invokes external systems like CRM, screening engines, or policy databases.
- Chain-of-thought prompting
  - A prompting technique where models reason through steps internally; ReAct extends this by pairing reasoning with actions.
- Agent orchestration
  - The control layer that decides which tools are available, when approvals are needed, and how steps are sequenced.
- Human-in-the-loop review
  - A governance pattern where high-risk outputs require manual sign-off before action is taken.
- RAG (Retrieval-Augmented Generation)
  - A way to ground responses in internal documents; often combined with ReAct so the agent can retrieve facts before deciding what to do next.
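Tool use in the first bullet above is typically exposed as function calling: the model emits a structured call (a tool name plus JSON arguments), and the orchestration layer executes it. A minimal sketch, where the tool name and its behavior are hypothetical:

```python
# Sketch of function calling: the model emits a structured tool call,
# and the orchestrator dispatches it. The tool here is hypothetical.
import json

def check_sanctions(client_id):
    """Hypothetical screening lookup standing in for a real engine."""
    return {"client_id": client_id, "hit": False}

REGISTRY = {"check_sanctions": check_sanctions}

# A model output in function-calling style: a name plus JSON arguments.
model_output = {
    "name": "check_sanctions",
    "arguments": json.dumps({"client_id": "C-42"}),
}

def dispatch(call):
    """Look up the named tool and invoke it with the parsed arguments."""
    fn = REGISTRY[call["name"]]
    return fn(**json.loads(call["arguments"]))

result = dispatch(model_output)
```

Because the orchestrator, not the model, performs the execution, the registry doubles as an allowlist: the agent can only touch systems you have explicitly registered.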
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit