How to Integrate Anthropic with pgvector for Lending Startups
If you’re building lending workflows for a startup, you need two things: a model that can reason over borrower context, and a retrieval layer that can pull the right policy, product, and customer data fast. Anthropic handles the language and decision support; pgvector gives you semantic search over loan docs, underwriting notes, KYC summaries, and support history.
The useful pattern is simple: store lending knowledge in Postgres with pgvector, retrieve the most relevant chunks for a borrower or application, then send that context to Anthropic to draft responses, summarize risk signals, or classify applications.
Prerequisites
- Python 3.10+
- PostgreSQL 15+ with the `pgvector` extension installed
- A database user with permission to create tables and indexes
- An Anthropic API key
- The `anthropic`, `psycopg[binary]`, and `pgvector` Python packages
- A text embedding strategy:
  - either embeddings generated by your own pipeline
  - or a separate embedding model that produces fixed-size vectors for pgvector
- Basic lending documents ready to index:
  - loan policies
  - product terms
  - underwriting rules
  - borrower support transcripts
Install the Python dependencies:
```shell
pip install anthropic "psycopg[binary]" pgvector
```
Integration Steps
1. Set up pgvector in Postgres
Create the extension and a table for lending knowledge chunks. Each row stores text plus its embedding vector.
```python
import psycopg

conn = psycopg.connect("postgresql://postgres:postgres@localhost:5432/lending")
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
conn.execute("""
    CREATE TABLE IF NOT EXISTS lending_chunks (
        id SERIAL PRIMARY KEY,
        source TEXT NOT NULL,
        chunk TEXT NOT NULL,
        embedding vector(1536)
    )
""")
conn.commit()
```
Use the vector dimension that matches your embedding model. If your embeddings are 1024 or 3072 dimensions, change vector(1536) accordingly.
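A dimension mismatch only surfaces as a Postgres error at insert time, so it helps to fail fast in Python first. A minimal guard, assuming the 1536-dimension table above (`validate_embedding` is an illustrative helper, not part of pgvector):

```python
EMBEDDING_DIM = 1536  # must match vector(1536) in the table definition


def validate_embedding(embedding: list[float], dim: int = EMBEDDING_DIM) -> list[float]:
    """Raise early if an embedding doesn't match the table's declared dimension."""
    if len(embedding) != dim:
        raise ValueError(f"expected {dim}-dim embedding, got {len(embedding)}")
    return embedding


validate_embedding([0.0] * 1536)  # passes silently
```

Call it on every embedding before the INSERT so a provider or model change shows up as one clear error instead of scattered database failures.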
2. Generate embeddings and store them in pgvector
Anthropic’s API is for generation and reasoning, not native embeddings. In production, pair it with an embedding model, then store those vectors in pgvector for retrieval.
```python
import psycopg
from pgvector.psycopg import register_vector


def embed_text(text: str) -> list[float]:
    # Replace with your actual embedding provider.
    # Must return a list of floats matching your vector dimension.
    raise NotImplementedError


conn = psycopg.connect("postgresql://postgres:postgres@localhost:5432/lending")
register_vector(conn)

documents = [
    ("policy", "Borrowers must provide two months of bank statements."),
    ("underwriting", "Self-employed applicants require tax returns for the last two years."),
]

with conn.cursor() as cur:
    for source, chunk in documents:
        embedding = embed_text(chunk)
        cur.execute(
            "INSERT INTO lending_chunks (source, chunk, embedding) VALUES (%s, %s, %s)",
            (source, chunk, embedding),
        )
conn.commit()
```
This is the core retrieval index. Every lending rule or customer note you want the agent to use should be chunked and embedded here.
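What "chunked" means is up to you; a simple word-window splitter with overlap (`chunk_text` is a hypothetical helper shown as one reasonable default, not a library function) is enough to start:

```python
def chunk_text(text: str, max_words: int = 120, overlap: int = 20) -> list[str]:
    """Split text into overlapping word windows sized for embedding."""
    words = text.split()
    if not words:
        return []
    if len(words) <= max_words:
        return [text]
    chunks = []
    step = max_words - overlap  # advance 100 words per window by default
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks


policy = " ".join(f"word{i}" for i in range(300))
print(len(chunk_text(policy)))  # prints 3: three 120-word windows cover 300 words
```

The overlap keeps a rule that straddles a chunk boundary retrievable from either side; tune window size to whatever your embedding model handles well.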
3. Query pgvector for the most relevant lending context
When a borrower asks a question or submits an application, embed the query and retrieve nearest neighbors from Postgres.
```python
import psycopg
from pgvector.psycopg import register_vector


def embed_text(text: str) -> list[float]:
    # Same embedding function used at indexing time.
    raise NotImplementedError


query = "Can a self-employed borrower qualify without W-2 income?"

conn = psycopg.connect("postgresql://postgres:postgres@localhost:5432/lending")
register_vector(conn)

query_embedding = embed_text(query)

sql = """
    SELECT source, chunk
    FROM lending_chunks
    ORDER BY embedding <-> %s
    LIMIT 5;
"""
with conn.cursor() as cur:
    cur.execute(sql, (query_embedding,))
    rows = cur.fetchall()

context = "\n".join(f"[{source}] {chunk}" for source, chunk in rows)
print(context)
```
The <-> operator performs L2 (Euclidean) distance search; pgvector also provides <=> for cosine distance and <#> for negative inner product. For production systems, add an IVFFlat or HNSW index once your corpus grows.
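As a sketch, the DDL for an HNSW index looks like this; `vector_l2_ops` is the pgvector operator class that pairs with <-> (swap in `vector_cosine_ops` and <=> if you search by cosine distance):

```python
# DDL for an HNSW index matched to the <-> (L2 distance) operator.
# Build it after bulk-loading: index creation is faster on a populated table.
HNSW_INDEX_SQL = (
    "CREATE INDEX IF NOT EXISTS lending_chunks_embedding_idx "
    "ON lending_chunks USING hnsw (embedding vector_l2_ops)"
)

# With the connection from earlier:
# conn.execute(HNSW_INDEX_SQL)
# conn.commit()
```

HNSW gives better recall-speed tradeoffs than IVFFlat for most corpora at the cost of slower builds and more memory; either is fine at startup scale.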
4. Send retrieved context to Anthropic
Now pass the retrieved context into Claude so it can answer using your lending knowledge base instead of guessing.
```python
import os

from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# `context` comes from the retrieval step above.
prompt = f"""
You are a lending assistant.
Use only the context below to answer the user's question.

Context:
{context}

Question:
Can a self-employed borrower qualify without W-2 income?
"""

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=300,
    messages=[
        {"role": "user", "content": prompt}
    ],
)
print(response.content[0].text)
```
This pattern keeps answers grounded in your internal policy docs while still letting Claude produce readable output.
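The grounding instruction is the load-bearing part of that prompt, so it is worth factoring into one helper you reuse everywhere (`build_grounded_prompt` is a hypothetical name for illustration, not an SDK function):

```python
def build_grounded_prompt(context: str, question: str) -> str:
    """Assemble a prompt that keeps the model inside the retrieved context."""
    return (
        "You are a lending assistant.\n"
        "Use only the context below to answer the user's question.\n"
        "If the context does not contain the answer, say so explicitly.\n\n"
        f"Context:\n{context}\n\n"
        f"Question:\n{question}"
    )


prompt = build_grounded_prompt(
    "[underwriting] Self-employed applicants require tax returns for the last two years.",
    "Can a self-employed borrower qualify without W-2 income?",
)
```

The "say so explicitly" line matters in lending: an admitted gap is auditable, while a confident guess about loan policy is a compliance problem.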
5. Wrap retrieval + generation into one agent function
In practice, you want one function that does both steps so your app can call it from an API endpoint or workflow engine.
```python
import os

import psycopg
from anthropic import Anthropic
from pgvector.psycopg import register_vector

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])


def embed_text(text: str) -> list[float]:
    # Same embedding function used at indexing time.
    raise NotImplementedError


def answer_lending_question(question: str) -> str:
    conn = psycopg.connect("postgresql://postgres:postgres@localhost:5432/lending")
    register_vector(conn)

    q_emb = embed_text(question)
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT source, chunk
            FROM lending_chunks
            ORDER BY embedding <-> %s
            LIMIT 5;
            """,
            (q_emb,),
        )
        rows = cur.fetchall()

    context = "\n".join(f"[{source}] {chunk}" for source, chunk in rows)

    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=250,
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n\n{context}\n\nQuestion: {question}",
        }],
    )
    return resp.content[0].text.strip()
```
Testing the Integration
Run a simple end-to-end test with one known policy chunk and one borrower question.
```python
def test_answer_lending_question():
    question = "What documents does a self-employed applicant need?"
    answer = answer_lending_question(question)
    print(answer)


test_answer_lending_question()
```
Expected output:
A self-employed applicant needs tax returns for the last two years.
If required by policy, they may also need recent bank statements and additional income verification.
If you get irrelevant answers:

- check that your embedding dimension matches the table definition
- verify that retrieved chunks are actually related to the query
- make sure the prompt says to use only the retrieved context
Real-World Use Cases
- Loan application copilot
  - Retrieve borrower history, policy snippets, and product terms from pgvector.
  - Use Claude to draft next-step guidance for underwriters or support agents.
- Policy Q&A assistant
  - Index internal lending manuals and compliance docs.
  - Let staff ask natural-language questions like “What’s our DTI threshold for unsecured personal loans?”
- Borrower document triage
  - Store OCR’d intake documents as embeddings.
  - Classify missing items and generate follow-up requests based on retrieved examples.
The production pattern here is stable: Postgres stores truth, pgvector finds relevant evidence, and Anthropic turns that evidence into usable decisions or responses. That’s enough to ship an AI agent for lending without building brittle prompt-only workflows.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit