How to Integrate Anthropic with pgvector for Production Fintech AI
Combining Anthropic’s Claude models with pgvector gives you a clean production pattern for retrieval-augmented agents: the model handles reasoning, while pgvector stores and retrieves your institution’s internal knowledge, policies, and case history. That means you can build assistants that answer customer-service questions, summarize policy documents, triage fraud cases, or draft compliance-safe responses from your own data instead of relying on prompt stuffing.
Prerequisites
- Python 3.10+
- A PostgreSQL 14+ database
- The pgvector extension installed in PostgreSQL
- An Anthropic API key
- A working network path from your app to Postgres and Anthropic
- Python packages: anthropic, psycopg[binary], pgvector, python-dotenv

Install them:
pip install anthropic "psycopg[binary]" pgvector python-dotenv
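The prerequisites list python-dotenv, and the examples below read credentials from the environment. A common pattern is a local .env file loaded at startup; the values here are placeholders:

```
# .env (never commit this file)
ANTHROPIC_API_KEY=sk-ant-your-key-here
DATABASE_URL=postgresql://app_user:password@localhost:5432/fintech_ai
```

Call load_dotenv() from python-dotenv before the first os.environ read so these values are available to the code below.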
Integration Steps
Step 1: Set up PostgreSQL with pgvector

Create the extension and a table for embeddings. Keep the schema simple and explicit so you can index it later.
import psycopg

conn = psycopg.connect("postgresql://app_user:password@localhost:5432/fintech_ai")

# Requires a role with privileges to create extensions.
conn.execute("CREATE EXTENSION IF NOT EXISTS vector;")
conn.execute("""
    CREATE TABLE IF NOT EXISTS knowledge_chunks (
        id BIGSERIAL PRIMARY KEY,
        source TEXT NOT NULL,
        content TEXT NOT NULL,
        embedding VECTOR(1536)  -- must match your embedding model's dimension
    );
""")
conn.commit()
conn.close()
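This schema is index-ready. Once real data is loaded, the sequential scan behind similarity queries becomes the bottleneck; pgvector’s HNSW index (available in pgvector 0.5+) supports the cosine-distance operator used later in this guide. A sketch, with an illustrative index name:

```
-- Approximate nearest-neighbor index for cosine-distance queries.
-- On pgvector versions before 0.5, use ivfflat instead of hnsw.
CREATE INDEX IF NOT EXISTS knowledge_chunks_embedding_idx
    ON knowledge_chunks
    USING hnsw (embedding vector_cosine_ops);
```

If you use ivfflat instead, create the index after bulk-loading data so its clusters are trained on real vectors.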
Step 2: Generate embeddings for your content

Anthropic’s API does not include an embeddings endpoint, so use a dedicated embedding model from your stack for this step (Anthropic’s docs point to third-party embedding providers). If you are standardizing on Anthropic for orchestration, keep embedding generation isolated in its own module so retrieval stays deterministic and the model can be swapped without touching storage or retrieval.
In practice, your agent flow looks like this:
- chunk document
- embed chunk
- store vector in Postgres
- retrieve top matches on query
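The first step in that flow, chunking, is easy to get wrong. A minimal paragraph-aligned splitter (an illustrative sketch, not a prescribed pipeline) looks like this:

```python
def chunk_document(text: str, max_chars: int = 800) -> list[str]:
    """Split text into paragraph-aligned chunks of roughly max_chars."""
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        # Start a new chunk when adding this paragraph would overflow.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = (
    "Wire transfers above $10,000 require enhanced due diligence.\n\n"
    "Card disputes must be acknowledged within 24 hours."
)
print(chunk_document(doc, max_chars=60))
```

Paragraph-aligned chunks keep each policy statement intact, which matters more for retrieval quality than hitting an exact size budget.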
Here’s the storage side using pgvector in Python:
from pgvector.psycopg import register_vector
import numpy as np
import psycopg

conn = psycopg.connect("postgresql://app_user:password@localhost:5432/fintech_ai")
# register_vector adapts numpy arrays to the Postgres vector type.
register_vector(conn)

sample_chunks = [
    ("policy_001", "Wire transfers above $10,000 require enhanced due diligence."),
    ("policy_002", "Card disputes must be acknowledged within 24 hours."),
]

# Replace this with your embedding pipeline output.
sample_embeddings = [
    np.array([0.01] * 1536),
    np.array([0.02] * 1536),
]

with conn.cursor() as cur:
    for (source, content), embedding in zip(sample_chunks, sample_embeddings):
        cur.execute(
            "INSERT INTO knowledge_chunks (source, content, embedding) VALUES (%s, %s, %s)",
            (source, content, embedding),
        )
conn.commit()
conn.close()
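The constant vectors above make every row equally similar, so retrieval order is meaningless. If you want to exercise the pipeline before the real embedding model is wired in, a deterministic unit-norm placeholder (purely illustrative; it encodes no semantics) keeps tests repeatable:

```python
import hashlib
import math

def placeholder_embedding(text: str, dim: int = 1536) -> list[float]:
    """Deterministic stand-in for a real embedding model (no semantics)."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    # Cycle the 32 digest bytes out to the full dimension, centered on zero.
    values = [(digest[i % len(digest)] / 255.0) - 0.5 for i in range(dim)]
    norm = math.sqrt(sum(v * v for v in values))
    return [v / norm for v in values]

vec = placeholder_embedding("Wire transfers above $10,000 require enhanced due diligence.")
print(len(vec))  # 1536
```

Swap in a real embedding model before measuring retrieval quality; cosine distance between these hashed vectors is noise.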
Step 3: Query pgvector for relevant context

Use cosine distance (the <=> operator) to fetch the most relevant chunks for a user question. This is the retrieval layer that keeps your answers grounded in internal data.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

def retrieve_context(query_embedding, limit=3):
    conn = psycopg.connect("postgresql://app_user:password@localhost:5432/fintech_ai")
    register_vector(conn)
    sql = """
        SELECT source, content
        FROM knowledge_chunks
        ORDER BY embedding <=> %s
        LIMIT %s;
    """
    with conn.cursor() as cur:
        cur.execute(sql, (query_embedding, limit))
        rows = cur.fetchall()
    conn.close()
    return rows

# Example query vector placeholder.
query_embedding = np.array([0.015] * 1536)
matches = retrieve_context(query_embedding)
for source, content in matches:
    print(source, content)
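The <=> operator in the ORDER BY is pgvector’s cosine distance: 1 minus cosine similarity, so smaller means more similar. A pure-Python mirror (for intuition only, not something you would run in production) makes the ordering concrete:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity, matching the semantics of pgvector's <=>."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

query = [1.0, 0.0]
print(cosine_distance(query, [1.0, 0.0]))   # 0.0 (same direction)
print(cosine_distance(query, [0.0, 1.0]))   # 1.0 (orthogonal)
print(cosine_distance(query, [-1.0, 0.0]))  # 2.0 (opposite)
```

Distances range from 0 to 2, so ORDER BY with LIMIT returns the nearest chunks first.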
Step 4: Call Anthropic with retrieved context

Feed the retrieved chunks into Anthropic’s Messages API. Keep the prompt tight: system instructions define behavior; retrieved context provides facts.
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

context_block = "\n".join(
    f"[{source}] {content}" for source, content in matches
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=300,
    temperature=0,
    system=(
        "You are a fintech operations assistant. "
        "Answer only using provided context when possible. "
        "If the context is insufficient, say so clearly."
    ),
    messages=[
        {
            "role": "user",
            "content": f"""Question: What is the policy for wire transfers above $10,000?

Context:
{context_block}""",
        }
    ],
)

print(response.content[0].text)
Step 5: Wrap retrieval and generation into one production function

This is the unit you expose to your agent runtime or API service.
import numpy as np

def answer_fintech_question(question: str) -> str:
    # Step 1: embed the question using your embedding pipeline.
    # Replace this placeholder with real embeddings.
    question_embedding = np.array([0.015] * 1536)

    # Step 2: retrieve top matches from pgvector.
    matches = retrieve_context(question_embedding)

    # Step 3: call Anthropic with grounded context.
    context_block = "\n".join(f"[{src}] {txt}" for src, txt in matches)
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=250,
        temperature=0,
        system="You are a compliant fintech assistant.",
        messages=[
            {
                "role": "user",
                "content": f"Question: {question}\n\nContext:\n{context_block}",
            }
        ],
    )
    return response.content[0].text

print(answer_fintech_question("What happens when a wire transfer exceeds $10k?"))
Testing the Integration
Run an end-to-end test that inserts one known policy chunk and asks a matching question.
test_question = "What is required for wire transfers above $10,000?"
answer = answer_fintech_question(test_question)
print(answer)
Expected output (the model may phrase it differently):
Wire transfers above $10,000 require enhanced due diligence.
If you see a generic answer with no policy detail, check these causes:
- your retrieval query is not returning relevant rows
- your embeddings are too weak or mismatched to stored vectors
- your prompt is not passing context into messages.create()
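That last failure mode is the cheapest to test: verify that retrieved context actually reaches the prompt, with no database or API key required. The helper below is a hypothetical extraction of the f-string used inside answer_fintech_question:

```python
def build_prompt(question: str, matches: list[tuple[str, str]]) -> str:
    """Assemble the user message the same way the generation step does."""
    context_block = "\n".join(f"[{src}] {txt}" for src, txt in matches)
    return f"Question: {question}\n\nContext:\n{context_block}"

matches = [
    ("policy_001", "Wire transfers above $10,000 require enhanced due diligence."),
]
prompt = build_prompt("What is required for wire transfers above $10,000?", matches)

# If either assertion fails, the model never saw your policy text.
assert "[policy_001]" in prompt
assert "enhanced due diligence" in prompt
print("context reaches the prompt")
```

Extracting prompt assembly into its own function is what makes this check possible; keep it separate from the API call in production code.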
Real-World Use Cases
Compliance copilot
- Answer internal policy questions from approved documents stored in pgvector.
- Use Anthropic to summarize and explain policies without exposing raw document dumps.

Fraud ops assistant
- Retrieve prior fraud cases similar to the current alert.
- Have Anthropic draft analyst notes and next-step recommendations based on those cases.

Customer support agent
- Pull product terms, fee schedules, and dispute rules from Postgres.
- Generate consistent responses that stay aligned with current bank policy.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit