How to Integrate Anthropic for Pension Funds with pgvector for Production AI

By Cyprian Aarons · Updated 2026-04-21

Anthropic gives you the reasoning layer for policy-heavy pension workflows. pgvector gives you durable semantic retrieval over plan documents, member communications, actuarial notes, and historical case handling. Combined, they let you build an AI agent that answers pension questions with grounded context instead of guessing.

Prerequisites

  • Python 3.10+
  • PostgreSQL 14+ with the pgvector extension installed
  • An Anthropic API key
  • Access to your pension fund document corpus
  • These Python packages:
    • anthropic
    • psycopg[binary]
    • pgvector
    • python-dotenv
    • openai (used for embeddings below; any embedding provider works)

Install them:

pip install anthropic "psycopg[binary]" pgvector python-dotenv openai

Create the vector extension in your database:

CREATE EXTENSION IF NOT EXISTS vector;

Integration Steps

  1. Set up your environment and database connection

    Keep secrets out of code. Load your Anthropic key and connect to Postgres with a normal application role.
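A minimal .env file for local development might look like this (all values are placeholders; the variable names match the code below):

```
DATABASE_URL=postgresql://app_user:change-me@localhost:5432/pension_ai
ANTHROPIC_API_KEY=sk-ant-your-key-here
OPENAI_API_KEY=sk-your-embedding-key-here
```

Keep this file out of version control.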

import os
from dotenv import load_dotenv
import psycopg

load_dotenv()

DB_DSN = os.environ["DATABASE_URL"]
ANTHROPIC_API_KEY = os.environ["ANTHROPIC_API_KEY"]

conn = psycopg.connect(DB_DSN)
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS pension_docs (
            id bigserial PRIMARY KEY,
            doc_id text NOT NULL,
            title text NOT NULL,
            content text NOT NULL,
            embedding vector(1536)
        )
    """)
  2. Generate embeddings and store them in pgvector

    Anthropic does not currently offer a first-party embeddings API, so Claude handles generation while a dedicated embedding model handles retrieval. This example uses OpenAI's text-embedding-3-small, whose 1536-dimensional output matches the vector(1536) column above; any provider works as long as the dimensions line up. In production, keep generation and retrieval separate so your agent stays auditable.

from openai import OpenAI  # pip install openai; swap in any embedding provider
                           # whose output dimension matches vector(1536)

embed_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def embed_text(text: str) -> list[float]:
    resp = embed_client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return resp.data[0].embedding

docs = [
    {
        "doc_id": "policy-001",
        "title": "Retirement Eligibility",
        "content": "Members may retire at age 60 with 10 years of service..."
    },
    {
        "doc_id": "policy-002",
        "title": "Early Withdrawal Rules",
        "content": "Early withdrawal is subject to tax withholding and trustee approval..."
    }
]

with conn.cursor() as cur:
    for doc in docs:
        emb = embed_text(doc["content"])
        # Explicit ::vector cast: psycopg sends the Python list as a Postgres
        # array, which the cast converts for the vector(1536) column.
        cur.execute(
            """
            INSERT INTO pension_docs (doc_id, title, content, embedding)
            VALUES (%s, %s, %s, %s::vector)
            """,
            (doc["doc_id"], doc["title"], doc["content"], emb)
        )
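
The sample documents above are short; real plan documents need to be split into retrieval-sized chunks before embedding. A minimal sketch of a chunker (the word-window sizes here are arbitrary starting points, not tuned values):

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping word windows for embedding."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks
```

Embed each chunk and store it as its own row; the doc_id column ties chunks back to their source document.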
  3. Retrieve the most relevant pension context from pgvector

    Use cosine distance in Postgres to pull the top matches for a user query.

def search_docs(query: str, limit: int = 3):
    q_emb = embed_text(query)

    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT doc_id, title, content
            FROM pension_docs
            ORDER BY embedding <=> %s::vector
            LIMIT %s
            """,
            (q_emb, limit)
        )
        return cur.fetchall()

matches = search_docs("Can a member retire early at 58?")
for row in matches:
    print(row)
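
At production scale, the query above runs a sequential scan over every embedding. pgvector supports approximate-nearest-neighbor indexes; a sketch assuming pgvector 0.5 or newer, using the cosine operator class so the index matches the <=> queries:

```sql
CREATE INDEX IF NOT EXISTS pension_docs_embedding_idx
ON pension_docs USING hnsw (embedding vector_cosine_ops);
```

On older pgvector versions, an ivfflat index with vector_cosine_ops is the equivalent option.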
  4. Call Anthropic with retrieved context to generate the answer

    This is the production pattern: retrieve first, then ask Claude to answer only from that context.

from anthropic import Anthropic

client = Anthropic(api_key=ANTHROPIC_API_KEY)

def answer_pension_question(question: str) -> str:
    matches = search_docs(question)

    context_block = "\n\n".join(
        f"[{doc_id}] {title}\n{content}"
        for doc_id, title, content in matches
    )

    message = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=400,
        temperature=0,
        messages=[
            {
                "role": "user",
                "content": f"""
You are a pension operations assistant.
Answer only using the provided context.
If the context is insufficient, say so clearly.

Question: {question}

Context:
{context_block}
"""
            }
        ]
    )

    return message.content[0].text

print(answer_pension_question("Can a member retire early at 58?"))
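
Production calls should tolerate transient API failures. A minimal, generic retry helper (a sketch, not Anthropic-specific; the Anthropic SDK also has its own built-in retry behavior you may prefer to configure instead):

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn() with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage: answer = with_retries(lambda: answer_pension_question(question))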
  5. Wrap it into an agent endpoint

    Expose one function that your app or workflow engine can call. That keeps retrieval and generation behind a single contract.

from fastapi import FastAPI

app = FastAPI()

@app.post("/ask")
def ask(payload: dict):
    question = payload["question"]
    answer = answer_pension_question(question)
    return {"question": question, "answer": answer}
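
With the server running (for example via uvicorn main:app, assuming the file is named main.py), you can exercise the endpoint with curl:

```shell
curl -X POST http://localhost:8000/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "Can a member retire early at 58?"}'
```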

Testing the Integration

Run a direct smoke test against one known policy question.

test_question = "What is required for retirement eligibility?"
result = answer_pension_question(test_question)

print("QUESTION:", test_question)
print("ANSWER:", result)

Expected output:

QUESTION: What is required for retirement eligibility?
ANSWER: Members may retire at age 60 with 10 years of service...

If you get a generic answer without policy details, check these first:

  • Your embeddings were stored correctly in vector(1536)
  • The similarity query returns relevant rows
  • The prompt instructs Claude to use only retrieved context
  • Your document chunks are small enough to retrieve precisely
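
On the second point, it helps to know exactly what the ordering in search_docs means: pgvector's <=> operator returns cosine distance, i.e. 1 minus cosine similarity (0 for identical directions, up to 2 for opposite ones). A pure-Python equivalent for offline sanity checks:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """Cosine distance (1 - cosine similarity), the quantity <=> orders by."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)
```

If two chunks you expect to be near-duplicates score far apart here, the problem is in your embeddings, not your SQL.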

Real-World Use Cases

  • Member support assistant that answers benefit questions from approved plan documents and internal SOPs.
  • Claims triage agent that retrieves prior cases and policy language before drafting next-step recommendations.
  • Compliance review helper that summarizes regulatory references and flags missing evidence before human review.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
