How to Integrate LlamaIndex for insurance with Supabase for startups
Combining LlamaIndex for insurance with Supabase gives you a clean pattern for building insurance AI agents that can retrieve policy knowledge, persist customer context, and write back structured outcomes. For startups, this is the difference between a demo chatbot and a system that can answer claims questions, store conversation state, and support real workflows.
Prerequisites
- Python 3.10+
- A Supabase project with:
  - a project URL
  - an anon key or service role key
  - a table for agent memory or claim records
- Access to your LlamaIndex setup for insurance:
  - installed LlamaIndex packages
  - your insurance document corpus indexed or ready to index
- Environment variables set:
  - `SUPABASE_URL`
  - `SUPABASE_SERVICE_ROLE_KEY`
  - `OPENAI_API_KEY` (or another LLM provider key used by LlamaIndex)
- Basic familiarity with:
  - Python async/sync calls
  - vector retrieval concepts
  - SQL tables in Supabase
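Before wiring anything together, it pays to fail fast when a required environment variable is missing. A minimal sketch, where `require_env` is my own helper (not part of Supabase or LlamaIndex); in practice you would call `load_dotenv()` from `python-dotenv` first to pull values from a local `.env` file:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, failing loudly if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example: fail fast at startup instead of deep inside a request handler.
os.environ.setdefault("SUPABASE_URL", "https://example.supabase.co")
supabase_url = require_env("SUPABASE_URL")
```

Checking credentials once at startup gives you a clear error message instead of an opaque connection failure later.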
Integration Steps
1. Install the dependencies.

Use the official clients for both sides: `supabase` for database access and `llama-index` components for retrieval and querying.

```bash
pip install supabase llama-index python-dotenv
```
If you are using a specific LlamaIndex insurance package or workflow layer in your stack, install that too. The integration pattern stays the same: LlamaIndex handles retrieval and reasoning, Supabase stores state and outputs.
2. Connect to Supabase and create a persistence table.

For an AI agent, you want a place to store conversation turns, claim summaries, and retrieved answers. Here is a simple table schema you can create in the Supabase SQL editor:
```sql
create table if not exists insurance_agent_events (
  id bigint generated always as identity primary key,
  session_id text not null,
  event_type text not null,
  payload jsonb not null,
  created_at timestamptz default now()
);
```
Now connect from Python using the Supabase client:
```python
import os

from supabase import create_client

supabase_url = os.environ["SUPABASE_URL"]
supabase_key = os.environ["SUPABASE_SERVICE_ROLE_KEY"]
supabase = create_client(supabase_url, supabase_key)

# Sanity-check query: confirms credentials and that the table exists.
response = supabase.table("insurance_agent_events").select("*").limit(1).execute()
print(response.data)
```
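Network calls to Supabase can fail transiently, so it helps to wrap them in a retry. A small sketch of a generic backoff helper; `with_retries` is my own addition, not part of the `supabase` client:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage sketch (assumes a `supabase` client as created above):
# rows = with_retries(
#     lambda: supabase.table("insurance_agent_events").select("*").limit(1).execute()
# )
```

Keeping the retry logic in one helper means every Supabase call in the agent gets the same failure behavior.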
3. Build your LlamaIndex insurance query engine.
For insurance use cases, you usually have policy PDFs, underwriting notes, claims rules, or FAQ documents. Load them into a vector index and expose a query engine.
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings

# Configure your model provider via Settings here if needed.
# This example assumes the environment is already configured for your
# chosen LLM and embedding model.
docs = SimpleDirectoryReader("./insurance_docs").load_data()
index = VectorStoreIndex.from_documents(docs)
query_engine = index.as_query_engine(similarity_top_k=3)

question = "Does this policy cover water damage from burst pipes?"
answer = query_engine.query(question)
print(str(answer))
```
This is the core of the integration: LlamaIndex answers from your insurance corpus, while Supabase stores the interaction.
4. Write agent events into Supabase after each query.
Once the model returns an answer, persist both the question and response. That gives you auditability, analytics, and session continuity.
```python
from datetime import datetime, timezone

session_id = "session_001"
user_question = "Does this policy cover water damage from burst pipes?"
agent_answer = str(answer)

event_payload = {
    "question": user_question,
    "answer": agent_answer,
    "source": "llamaindex_insurance_query",
    # Timezone-aware UTC timestamp (datetime.utcnow() is deprecated).
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Assumes the `supabase` client from the earlier step.
insert_response = supabase.table("insurance_agent_events").insert({
    "session_id": session_id,
    "event_type": "qa_turn",
    "payload": event_payload,
}).execute()
print(insert_response.data)
```
If you are building an actual support workflow, this table becomes your trace log. You can later replay sessions, inspect failure cases, or feed resolved claims back into product analytics.
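Replaying sessions from that trace log is mostly a matter of grouping rows. A minimal sketch of a pure helper for that; `group_events_by_session` is my own addition, and it assumes rows shaped like the `insurance_agent_events` table above:

```python
from collections import defaultdict

def group_events_by_session(rows):
    """Group event rows into per-session timelines, preserving insert order."""
    sessions = defaultdict(list)
    for row in rows:
        sessions[row["session_id"]].append(row["payload"])
    return dict(sessions)

# Toy rows standing in for a Supabase query result.
rows = [
    {"session_id": "s1", "payload": {"question": "Q1", "answer": "A1"}},
    {"session_id": "s2", "payload": {"question": "Q2", "answer": "A2"}},
    {"session_id": "s1", "payload": {"question": "Q3", "answer": "A3"}},
]
sessions = group_events_by_session(rows)
```

Because the helper is pure, you can run it over exported rows in a notebook when inspecting failure cases, without touching the live database.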
5. Read prior context from Supabase before querying LlamaIndex.
A startup-grade agent should not answer every question in isolation. Pull recent history from Supabase and pass it into your prompt or query layer.
```python
history_response = (
    supabase.table("insurance_agent_events")
    .select("payload")
    .eq("session_id", session_id)
    .eq("event_type", "qa_turn")
    .order("created_at", desc=False)
    .limit(5)
    .execute()
)
history = [row["payload"] for row in history_response.data]

context_lines = []
for item in history:
    context_lines.append(f"Q: {item['question']}\nA: {item['answer']}")
context_block = "\n\n".join(context_lines)

follow_up_question = f"""
Conversation history:
{context_block}

New question: Is accidental damage included under this policy?
"""

follow_up_answer = query_engine.query(follow_up_question)
print(str(follow_up_answer))
```
That pattern gives your agent memory without forcing everything into the prompt window blindly. Use Supabase as durable memory; use LlamaIndex as the reasoning layer over insurance documents.
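The history-formatting step above can be factored into a small pure function that is easy to unit-test. A sketch; `format_history` is my own helper, not part of LlamaIndex:

```python
def format_history(payloads, max_turns: int = 5) -> str:
    """Render stored qa_turn payloads as a Q/A context block for the prompt."""
    lines = [
        f"Q: {p['question']}\nA: {p['answer']}"
        for p in payloads[-max_turns:]
    ]
    return "\n\n".join(lines)

# Toy history standing in for payloads read back from Supabase.
history = [
    {"question": "Is flood damage covered?", "answer": "Only with a rider."},
    {"question": "What is the claim window?", "answer": "60 days from loss."},
]
context_block = format_history(history)
```

Capping the turn count in one place (`max_turns`) keeps prompt growth under control as sessions get long.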
Testing the Integration
Run one end-to-end test: query policy content through LlamaIndex, store it in Supabase, then read it back.
```python
test_session_id = "test_session_123"
test_question = "What is the deductible for theft claims?"
test_answer = str(query_engine.query(test_question))

supabase.table("insurance_agent_events").insert({
    "session_id": test_session_id,
    "event_type": "qa_turn",
    "payload": {
        "question": test_question,
        "answer": test_answer,
    },
}).execute()

rows = (
    supabase.table("insurance_agent_events")
    .select("session_id,event_type,payload")
    .eq("session_id", test_session_id)
    .execute()
)
print(rows.data[-1]["payload"]["question"])
print(rows.data[-1]["payload"]["answer"])
```
Expected output:

```text
What is the deductible for theft claims?
[Answer generated from your indexed insurance documents]
```
If the insert succeeds and the read returns the same payload, your integration is working end to end.
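To make that check repeatable rather than eyeballed, you can assert on the round-tripped payload's shape. A sketch; `validate_qa_payload` is a hypothetical helper of my own, and the example payload is illustrative:

```python
def validate_qa_payload(payload: dict) -> bool:
    """Check that a stored qa_turn payload has non-empty question and answer."""
    return all(
        isinstance(payload.get(key), str) and payload[key].strip()
        for key in ("question", "answer")
    )

stored = {
    "question": "What is the deductible for theft claims?",
    "answer": "Example answer text from the indexed documents.",
}
assert validate_qa_payload(stored)
```

Running this against the rows read back from Supabase catches silently empty answers before they reach a customer.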
Real-World Use Cases
- Claims triage assistant
  - Answer coverage questions from policy docs with LlamaIndex.
  - Store claim notes and decision trails in Supabase for review.
- Underwriting copilot
  - Retrieve underwriting guidelines from indexed documents.
  - Persist applicant context and risk flags in Supabase tables.
- Customer support memory layer
  - Keep conversation history across sessions.
  - Let the agent continue from prior interactions without rebuilding state every time.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit