How to Integrate LlamaIndex for Insurance with Supabase for Multi-Agent Systems
Combining LlamaIndex for insurance with Supabase gives you a practical stack for agentic insurance workflows: one layer to reason over policy, claims, and underwriting docs, and another to persist shared state across agents. That means you can build multi-agent systems where one agent retrieves policy context, another checks claim eligibility, and a third writes decisions, audit trails, and task state into Postgres without inventing your own backend.
Prerequisites
- Python 3.10+
- A Supabase project with:
  - project URL
  - service role key or anon key
  - a table for agent state, for example `agent_runs`
- A LlamaIndex setup for insurance:
  - `llama-index` installed
  - access to your chosen LLM provider
  - insurance documents loaded into a `VectorStoreIndex` or similar retriever-backed index
- Environment variables configured:
  - `SUPABASE_URL`
  - `SUPABASE_KEY`
  - `OPENAI_API_KEY` or equivalent LLM key
- Basic familiarity with:
  - Python async/sync calls
  - SQL tables in Supabase
  - LlamaIndex retrievers and query engines
Integration Steps
- Install the packages and initialize both clients.

```bash
pip install llama-index supabase python-dotenv
```

```python
import os

from dotenv import load_dotenv
from supabase import create_client, Client

load_dotenv()

supabase_url = os.environ["SUPABASE_URL"]
supabase_key = os.environ["SUPABASE_KEY"]

supabase: Client = create_client(supabase_url, supabase_key)
```
- Build the insurance knowledge index with LlamaIndex.

This example assumes you already have policy PDFs or claims docs loaded locally. The key part is that your insurance agent can query structured context from the index before writing anything to Supabase.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

docs = SimpleDirectoryReader("./insurance_docs").load_data()
insurance_index = VectorStoreIndex.from_documents(docs)
query_engine = insurance_index.as_query_engine(similarity_top_k=3)

response = query_engine.query("What is the waiting period for outpatient surgery?")
print(response)
```
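For insurance audit trails you usually want the evidence behind an answer, not just the answer. LlamaIndex query responses expose the retrieved chunks via `response.source_nodes`; the sketch below formats plain `(text, score)` pairs into a compact citation string you could store in Supabase next to the answer. The helper name `summarize_sources` is hypothetical, and the extraction line in the comment assumes the `NodeWithScore` shape of recent `llama-index` versions.

```python
def summarize_sources(chunks: list[tuple[str, float]], max_chars: int = 80) -> str:
    """Format retrieved evidence chunks as a compact citation string.

    `chunks` is a list of (text, relevance_score) pairs. The formatting is
    plain Python, so it works regardless of which retriever produced them.
    """
    lines = []
    for i, (text, score) in enumerate(chunks, start=1):
        # Collapse whitespace and truncate so one chunk stays one line.
        snippet = " ".join(text.split())[:max_chars]
        lines.append(f"[{i}] (score={score:.2f}) {snippet}")
    return "\n".join(lines)

# With a real LlamaIndex response you might build `chunks` like this
# (assumes response.source_nodes holds NodeWithScore objects):
# chunks = [(n.node.get_content(), n.score) for n in response.source_nodes]
```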
- Create a Supabase table for multi-agent state.

Use Supabase as the shared memory layer between agents. A simple schema is enough to start: store run IDs, agent names, inputs, outputs, and timestamps.

```sql
create table if not exists agent_runs (
  id uuid primary key default gen_random_uuid(),
  conversation_id text not null,
  agent_name text not null,
  input_text text not null,
  output_text text not null,
  created_at timestamptz default now()
);
```
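Downstream agents read this table filtered by `conversation_id` and ordered by `created_at`, so as the audit trail grows an index on that pair keeps those lookups fast. This is optional, and the index name is just an example:

```sql
-- Optional: speeds up per-conversation reads as agent_runs grows.
create index if not exists agent_runs_conversation_idx
  on agent_runs (conversation_id, created_at desc);
```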
From Python, insert each agent step into that table.

```python
def log_agent_run(conversation_id: str, agent_name: str,
                  input_text: str, output_text: str):
    result = supabase.table("agent_runs").insert({
        "conversation_id": conversation_id,
        "agent_name": agent_name,
        "input_text": input_text,
        "output_text": output_text,
    }).execute()
    return result

log_agent_run(
    conversation_id="claim-10021",
    agent_name="policy_retriever",
    input_text="Check whether physiotherapy is covered.",
    output_text="Physiotherapy is covered up to 10 sessions per year.",
)
```
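The logger above writes whatever it is given. In practice you may want to validate and trim fields before they hit Postgres, so one malformed or runaway LLM response cannot produce an empty or bloated audit row. A sketch of such a payload builder (the helper name `build_run_payload` and the 8000-character cap are illustrative choices, not Supabase requirements):

```python
def build_run_payload(conversation_id: str, agent_name: str,
                      input_text: str, output_text: str,
                      max_len: int = 8000) -> dict:
    """Validate and normalize one agent step before inserting into agent_runs.

    Raises ValueError on empty required fields and truncates oversized text.
    The max_len cap is an arbitrary example, not a database limit.
    """
    for name, value in [("conversation_id", conversation_id),
                        ("agent_name", agent_name),
                        ("input_text", input_text)]:
        if not value or not value.strip():
            raise ValueError(f"{name} must be non-empty")
    return {
        "conversation_id": conversation_id.strip(),
        "agent_name": agent_name.strip(),
        "input_text": input_text.strip()[:max_len],
        "output_text": (output_text or "").strip()[:max_len],
    }
```

You would then insert with `supabase.table("agent_runs").insert(build_run_payload(...)).execute()` instead of passing a raw dict.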
- Wire the retrieval step into an agent workflow.

A common pattern is: retrieve policy evidence with LlamaIndex, then store the result in Supabase so downstream agents can reuse it without re-querying the docs.

```python
def retrieve_policy_answer(question: str) -> str:
    result = query_engine.query(question)
    return str(result)

conversation_id = "claim-10021"
question = "Is physiotherapy covered under this plan?"

answer = retrieve_policy_answer(question)
log_agent_run(
    conversation_id=conversation_id,
    agent_name="policy_agent",
    input_text=question,
    output_text=answer,
)
```
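The retrieve-then-log pattern generalizes to a small orchestrator that runs agents in sequence and persists every step. The sketch below keeps the agents and the logger as injected callables, so it stays independent of any particular framework; `run_pipeline` and its signature are hypothetical, not a LlamaIndex or Supabase API:

```python
from typing import Callable

def run_pipeline(conversation_id: str,
                 question: str,
                 agents: list[tuple[str, Callable[[str], str]]],
                 log_fn: Callable[[str, str, str, str], None]) -> str:
    """Run agents in order, feeding each one the previous agent's output.

    `agents` is a list of (agent_name, fn) pairs; `log_fn` persists each
    step (e.g. log_agent_run backed by Supabase). Returns the final output.
    """
    current = question
    for agent_name, fn in agents:
        output = fn(current)
        log_fn(conversation_id, agent_name, current, output)
        current = output
    return current
```

Wired up with the pieces from this guide, that might look like `run_pipeline("claim-10021", question, [("policy_agent", retrieve_policy_answer)], log_agent_run)`.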
- Read shared state back from Supabase for downstream agents.

This is where multi-agent systems get useful. One agent writes the evidence; another reads it and makes a decision or drafts a response.

```python
def get_latest_agent_context(conversation_id: str):
    result = (
        supabase.table("agent_runs")
        .select("*")
        .eq("conversation_id", conversation_id)
        .order("created_at", desc=True)
        .limit(5)
        .execute()
    )
    return result.data

context_rows = get_latest_agent_context("claim-10021")
for row in context_rows:
    print(row["agent_name"], row["output_text"])
```
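A downstream decision agent typically flattens those stored rows into prompt context. The pure-Python helper below (hypothetical name `rows_to_context`) does that, reversing the rows so the context reads chronologically, since `get_latest_agent_context` returns them newest-first:

```python
def rows_to_context(rows: list[dict], max_rows: int = 5) -> str:
    """Format agent_runs rows as a prompt context block for the next agent.

    `rows` arrive newest-first, so we reverse them before formatting
    to preserve the order in which the agents actually ran.
    """
    ordered = list(reversed(rows[:max_rows]))
    lines = [f"- {r['agent_name']}: {r['output_text']}" for r in ordered]
    return "Prior agent findings:\n" + "\n".join(lines)
```

The resulting string can be prepended to the decision agent's prompt, or passed back through the same query engine as additional context.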
Testing the Integration
Run a single end-to-end check:

```python
conversation_id = "test-claim-001"
question = "Does this policy cover emergency room visits?"

answer = retrieve_policy_answer(question)
log_result = log_agent_run(
    conversation_id=conversation_id,
    agent_name="policy_agent",
    input_text=question,
    output_text=answer,
)

rows = get_latest_agent_context(conversation_id)

print("Inserted:", len(log_result.data) > 0)
print("Rows returned:", len(rows))
print("Latest agent:", rows[0]["agent_name"])
print("Latest answer:", rows[0]["output_text"][:120])
```
Expected output:

```text
Inserted: True
Rows returned: 1
Latest agent: policy_agent
Latest answer: Emergency room visits are covered subject to deductible and prior authorization rules...
```
Real-World Use Cases
- **Claims triage pipeline.** One agent retrieves coverage terms from LlamaIndex, another checks claim metadata against policy rules, and a third stores the decision trail in Supabase for auditability.
- **Underwriting assistant.** Use LlamaIndex to search underwriting guidelines and product docs, then persist risk notes and recommendations in Supabase so multiple agents can coordinate on the same applicant.
- **Customer service copilot.** A support agent answers policy questions from indexed insurance documents while a workflow agent stores conversation state, escalation flags, and follow-up tasks in Supabase.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit