How to Integrate LlamaIndex with Supabase for Lending AI Agents
Combining LlamaIndex with Supabase gives you a practical pattern for building loan-aware AI agents: LlamaIndex handles retrieval over lending documents, while Supabase provides a persistent backend for borrower profiles, conversation state, and audit trails. That means your agent can answer policy questions, pull the right loan terms, and store every decision's context in one place.
Prerequisites
- Python 3.10+
- A Supabase project with:
  - `SUPABASE_URL`
  - `SUPABASE_SERVICE_ROLE_KEY` (or the anon key for local testing)
- A Postgres table in Supabase for agent state or lending metadata
- Access to your lending corpus: PDFs, DOCX files, policy docs, term sheets, underwriting guides
- Installed packages: `llama-index`, `llama-index-vector-stores-supabase`, `supabase`, `python-dotenv`
- An LLM provider key configured for LlamaIndex, such as OpenAI
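The steps below read these values from the environment. A minimal `.env` sketch with placeholder values (the `OPENAI_API_KEY` name assumes OpenAI as your LLM provider):

```shell
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
SUPABASE_POSTGRES_CONNECTION_STRING=postgresql://postgres:your-password@db.your-project.supabase.co:5432/postgres
OPENAI_API_KEY=sk-...
```

Keep the service role key out of version control; it bypasses row-level security.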
Integration Steps
1. **Set up your environment and clients.**

Start by wiring environment variables and initializing both SDKs. Keep Supabase as the system of record for structured data, and use LlamaIndex for document retrieval over lending content.
```python
import os

from dotenv import load_dotenv
from supabase import create_client, Client

load_dotenv()

SUPABASE_URL = os.getenv("SUPABASE_URL")
SUPABASE_KEY = os.getenv("SUPABASE_SERVICE_ROLE_KEY")

supabase: Client = create_client(SUPABASE_URL, SUPABASE_KEY)
```
For LlamaIndex, configure the LLM and embedding model you want to use.
```python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

Settings.llm = OpenAI(model="gpt-4o-mini", temperature=0)
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
```
2. **Load lending documents into LlamaIndex.**

Use a file reader to ingest your lending policy docs, then build an index. For production systems, chunking strategy matters because loan policies often contain dense clauses that need stable retrieval.
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./lending_docs").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)
```
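To make the chunking concern concrete: in practice you would tune a LlamaIndex node parser (for example `SentenceSplitter` with `chunk_size` and `chunk_overlap`) rather than roll your own, but the idea behind those two knobs can be shown dependency-free. `chunk_text` below is an illustrative sketch, not a LlamaIndex API:

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into fixed-size chunks that overlap.

    Overlap reduces the chance that a dense policy clause is cut
    cleanly in half at a chunk boundary, which would hurt retrieval.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Toy corpus: the same clause repeated, so chunk boundaries are predictable.
policy = "Unsecured working capital loans require 12 months of operating history. " * 40
chunks = chunk_text(policy, chunk_size=200, overlap=40)
print(len(chunks), "chunks")
```

Smaller chunks retrieve more precisely but lose surrounding context; overlap is the usual compromise.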
If your corpus is large, split ingestion from query time and persist embeddings in a vector store. For a Supabase-backed setup, you can use the Supabase vector store integration.
```python
from llama_index.vector_stores.supabase import SupabaseVectorStore
from llama_index.core import StorageContext

vector_store = SupabaseVectorStore(
    postgres_connection_string=os.getenv("SUPABASE_POSTGRES_CONNECTION_STRING"),
    collection_name="lending_docs",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)
```
3. **Store borrower context in Supabase.**

Your agent should not keep borrower state only in memory. Store application metadata, conversation state, and decision logs in Supabase so downstream services can inspect them later.
```python
borrower_record = {
    "borrower_id": "borr_001",
    "full_name": "Amina Patel",
    "requested_amount": 250000,
    "purpose": "working capital",
    "status": "pre_qualification",
}

result = supabase.table("loan_applications").insert(borrower_record).execute()
print(result.data)
```
You can also fetch context before answering a question.
```python
app = (
    supabase.table("loan_applications")
    .select("*")
    .eq("borrower_id", "borr_001")
    .single()
    .execute()
)
borrower_context = app.data
```
4. **Combine retrieval from LlamaIndex with structured data from Supabase.**

This is the core integration pattern. Use Supabase for borrower-specific facts and LlamaIndex for policy and document retrieval, then assemble both into one prompt for the agent.
```python
question = "Can this borrower qualify for an unsecured working capital loan?"
retrieved_answer = query_engine.query(question)

prompt = f"""
Borrower context:
{borrower_context}

Relevant lending policy:
{retrieved_answer}

Answer the question using only the provided context.
"""

final_response = Settings.llm.complete(prompt)
print(final_response.text)
```
If you need stronger control, wrap this in an agent tool chain instead of calling the LLM directly. The important part is that document knowledge comes from LlamaIndex and operational truth comes from Supabase.
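One way to keep that boundary explicit, whether you call the LLM directly or hand the prompt to an agent tool, is to centralize prompt assembly. `build_agent_prompt` below is an illustrative helper, not part of either SDK:

```python
def build_agent_prompt(borrower: dict, policy_snippet: str, question: str) -> str:
    """Assemble one prompt from operational facts and retrieved policy.

    Keeping assembly in one place means every caller, direct or
    tool-based, presents the same context contract to the model.
    """
    return (
        f"Question: {question}\n\n"
        f"Borrower context:\n{borrower}\n\n"
        f"Relevant lending policy:\n{policy_snippet}\n\n"
        "Answer the question using only the provided context."
    )

prompt = build_agent_prompt(
    {"borrower_id": "borr_001", "status": "pre_qualification"},
    "Unsecured loans require 12 months of operating history.",
    "Can this borrower qualify for an unsecured working capital loan?",
)
print(prompt)
```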
5. **Persist the agent decision back to Supabase.**

Every decision should be logged. That gives you traceability for underwriting reviews and makes it easier to debug bad answers later.
```python
decision_log = {
    "borrower_id": "borr_001",
    "question": question,
    "retrieved_policy_snippet": str(retrieved_answer),
    "agent_decision": final_response.text,
}

log_result = supabase.table("agent_decisions").insert(decision_log).execute()
print(log_result.data)
```
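Because the verification query later sorts on `created_at`, it can help to stamp the record client-side when your table has no database default. `build_decision_log` is an illustrative helper; the column names mirror the example above:

```python
from datetime import datetime, timezone

def build_decision_log(borrower_id: str, question: str,
                       policy_snippet: str, decision: str) -> dict:
    """Assemble an audit-ready decision record.

    created_at is stamped client-side in UTC; drop it if your
    Postgres column already has DEFAULT now().
    """
    return {
        "borrower_id": borrower_id,
        "question": question,
        "retrieved_policy_snippet": policy_snippet,
        "agent_decision": decision,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

log = build_decision_log(
    "borr_001",
    "Can this borrower qualify for an unsecured working capital loan?",
    "Unsecured loans require 12 months of operating history.",
    "Needs underwriter review",
)
```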
A clean production pattern is:
| Concern | LlamaIndex | Supabase |
|---|---|---|
| Lending policy search | Yes | No |
| Borrower profile storage | No | Yes |
| Audit logging | No | Yes |
| RAG answers | Yes | Indirect |
| Transactional updates | No | Yes |
Testing the Integration
Run a simple end-to-end check: fetch borrower data from Supabase, retrieve policy text through LlamaIndex, then generate an answer.
```python
test_question = "What are the minimum requirements for a working capital loan?"

app = (
    supabase.table("loan_applications")
    .select("*")
    .eq("borrower_id", "borr_001")
    .single()
    .execute()
)
policy_answer = query_engine.query(test_question)

print("Borrower:", app.data["full_name"])
print("Policy answer:", policy_answer)
```
Expected output:

```
Borrower: Amina Patel
Policy answer: The minimum requirements include...
```
If that works, verify persistence too:
```python
check_log = (
    supabase.table("agent_decisions")
    .select("*")
    .eq("borrower_id", "borr_001")
    .order("created_at", desc=True)
    .limit(1)
    .execute()
)
print(check_log.data[0]["agent_decision"])
```
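To run the same flow in CI without a live Supabase project or an LLM key, you can inject fakes for the two dependencies. `answer_with_context` and the fake callables below are illustrative, not part of either SDK:

```python
def answer_with_context(fetch_borrower, query_policy, borrower_id: str, question: str) -> dict:
    """Run the fetch-then-retrieve flow with injected dependencies,
    so it can be unit-tested without network access."""
    borrower = fetch_borrower(borrower_id)
    policy = query_policy(question)
    return {
        "borrower": borrower["full_name"],
        "policy_answer": policy,
    }

# Fakes stand in for the Supabase client and the LlamaIndex query engine.
fake_fetch = lambda bid: {"borrower_id": bid, "full_name": "Amina Patel"}
fake_query = lambda q: "The minimum requirements include..."

result = answer_with_context(
    fake_fetch, fake_query, "borr_001",
    "What are the minimum requirements for a working capital loan?",
)
```

In production you would pass the real Supabase fetch and `query_engine.query` in place of the fakes.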
Real-World Use Cases
- **Loan pre-qualification agents**
  - Pull borrower facts from Supabase.
  - Retrieve eligibility rules from LlamaIndex.
  - Return a structured pre-check before handing off to an underwriter.
- **Policy Q&A assistants for loan ops teams**
  - Let staff ask questions like "Can we waive this fee?"
  - Answer from indexed underwriting manuals while logging every interaction in Supabase.
- **Audit-ready decision support**
  - Store prompts, retrieved snippets, and final recommendations in Supabase.
  - Reconstruct why an agent recommended approval or rejection months later.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit