How to Integrate Anthropic for pension funds with pgvector for multi-agent systems

By Cyprian Aarons · Updated 2026-04-21
Tags: anthropic-for-pension-funds · pgvector · multi-agent-systems

Why this integration matters

If you’re building AI agents for pension operations, you need two things: a model that can reason over policy-heavy workflows, and a retrieval layer that can pull the right plan documents, member records, and investment notes at the right time. Anthropic gives you the reasoning layer; pgvector gives you fast semantic search over pension knowledge so multi-agent systems can answer with context instead of guessing.

The useful pattern here is simple: one agent classifies the request, another retrieves relevant pension documents from pgvector, and Anthropic turns that evidence into a grounded response or action plan.

Prerequisites

  • Python 3.10+
  • PostgreSQL 14+ with the pgvector extension installed
  • An Anthropic API key
  • A database user with permission to create tables and extensions
  • These Python packages:
    • anthropic
    • psycopg[binary]
    • pgvector
    • openai (the examples embed with OpenAI's text-embedding-3-small; any embedding provider works if you match the vector dimension)
  • A local or hosted pension knowledge base to index:
    • plan rules
    • contribution policies
    • member FAQs
    • investment committee notes

Install dependencies:

pip install anthropic "psycopg[binary]" pgvector openai numpy

Integration Steps

  1. Set up PostgreSQL with pgvector and create your schema.
import os
import psycopg
from pgvector.psycopg import register_vector

DB_URL = os.environ["DATABASE_URL"]

with psycopg.connect(DB_URL) as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    # Register the vector type on this connection; required whenever
    # you read or write embedding columns through psycopg.
    register_vector(conn)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS pension_docs (
            id SERIAL PRIMARY KEY,
            doc_type TEXT NOT NULL,
            title TEXT NOT NULL,
            content TEXT NOT NULL,
            embedding vector(1536)
        );
    """)
    conn.commit()

print("pgvector schema ready")
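Once the table holds more than a handful of documents, an approximate-nearest-neighbor index keeps retrieval fast. A minimal sketch, assuming the schema above (the index name `pension_docs_embedding_idx` is arbitrary); `hnsw` with `vector_cosine_ops` matches the `<=>` cosine-distance queries used later:

```python
# Build an HNSW index on the embedding column so nearest-neighbor
# lookups stay fast as the table grows. vector_cosine_ops matches the
# <=> cosine-distance operator used in the retrieval queries.
# Requires pgvector 0.5+; on older versions use ivfflat instead.
CREATE_INDEX_SQL = """
CREATE INDEX IF NOT EXISTS pension_docs_embedding_idx
ON pension_docs
USING hnsw (embedding vector_cosine_ops);
"""

def create_embedding_index(db_url: str) -> None:
    import psycopg  # imported here so the SQL stays inspectable without a DB

    with psycopg.connect(db_url) as conn:
        conn.execute(CREATE_INDEX_SQL)
        conn.commit()

# To run: create_embedding_index(os.environ["DATABASE_URL"])
```

HNSW needs no training step, so you can create the index before or after loading documents.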
  2. Generate embeddings for pension documents and store them in pgvector.
import os
import numpy as np
import psycopg
from openai import OpenAI
from pgvector.psycopg import register_vector

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
DB_URL = os.environ["DATABASE_URL"]

docs = [
    {
        "doc_type": "policy",
        "title": "Early Retirement Rule",
        "content": "Members may retire early at age 55 with reduced benefits subject to plan approval."
    },
    {
        "doc_type": "faq",
        "title": "Contribution Changes",
        "content": "Employer contributions are reviewed quarterly by the trustee board."
    }
]

def embed(text: str):
    resp = client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return resp.data[0].embedding

with psycopg.connect(DB_URL) as conn:
    # register_vector lets psycopg bind numpy arrays to vector columns
    register_vector(conn)
    for doc in docs:
        vector = np.array(embed(doc["content"]))
        conn.execute(
            """
            INSERT INTO pension_docs (doc_type, title, content, embedding)
            VALUES (%s, %s, %s, %s)
            """,
            (doc["doc_type"], doc["title"], doc["content"], vector),
        )
    conn.commit()

print("documents indexed")
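The embed() helper above makes one API round trip per document. The OpenAI embeddings endpoint also accepts a list of inputs and returns one vector per input in the same order, so a larger knowledge base can be indexed in batches; chunked() is a small helper introduced here for that purpose:

```python
def chunked(items, size):
    """Yield consecutive slices of `items` with at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_batch(client, texts, model="text-embedding-3-small"):
    """Embed a list of texts in one API call; resp.data preserves input order."""
    resp = client.embeddings.create(model=model, input=texts)
    return [item.embedding for item in resp.data]

# Usage sketch:
# for batch in chunked([d["content"] for d in docs], 100):
#     vectors = embed_batch(client, batch)
```

Batch sizes around 100 inputs are a reasonable starting point; tune against your provider's request limits.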
  3. Retrieve relevant context from pgvector for a pension query.
import os
import numpy as np
import psycopg
from openai import OpenAI
from pgvector.psycopg import register_vector

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
DB_URL = os.environ["DATABASE_URL"]

def embed(text: str):
    resp = client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return resp.data[0].embedding

query = "Can a member retire at 55?"
query_vector = np.array(embed(query))

with psycopg.connect(DB_URL) as conn:
    # register_vector so the numpy query vector binds to the <=> operator
    register_vector(conn)
    rows = conn.execute(
        """
        SELECT title, content
        FROM pension_docs
        ORDER BY embedding <=> %s
        LIMIT 3;
        """,
        (query_vector,),
    ).fetchall()

for row in rows:
    print(row)
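Retrieval can also be scoped by document type, e.g. only "policy" documents for an eligibility question. A sketch under the schema above, using the doc_type values from the indexing step; it assumes register_vector(conn) has already been called on the connection:

```python
# Variant of the retrieval query that filters on doc_type before
# ranking by cosine distance.
FILTERED_QUERY = """
    SELECT title, content
    FROM pension_docs
    WHERE doc_type = %s
    ORDER BY embedding <=> %s
    LIMIT %s;
"""

def retrieve_by_type(conn, doc_type: str, query_vector, k: int = 3):
    """Return the k nearest documents of one doc_type to query_vector."""
    return conn.execute(FILTERED_QUERY, (doc_type, query_vector, k)).fetchall()
```

Filtering first keeps irrelevant FAQ or meeting-note text out of the context window on policy-sensitive questions.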
  4. Call Anthropic with retrieved context to produce a grounded answer.
import os
from anthropic import Anthropic

anthropic_client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

context = """
Title: Early Retirement Rule
Content: Members may retire early at age 55 with reduced benefits subject to plan approval.
"""

message = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=300,
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": f"""
You are a pension operations assistant.
Answer only using the provided context.

Context:
{context}

Question:
Can a member retire at 55?
"""
        }
    ],
)

print(message.content[0].text)
  5. Wire both into a simple multi-agent orchestration flow.
import os
import numpy as np
import psycopg
from anthropic import Anthropic
from openai import OpenAI
from pgvector.psycopg import register_vector

db_url = os.environ["DATABASE_URL"]
anthropic_client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
embed_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def embed(text: str):
    resp = embed_client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
    )
    return resp.data[0].embedding

def retrieve_context(question: str) -> str:
    qvec = np.array(embed(question))
    with psycopg.connect(db_url) as conn:
        register_vector(conn)
        rows = conn.execute(
            """
            SELECT title, content
            FROM pension_docs
            ORDER BY embedding <=> %s
            LIMIT 3;
            """,
            (qvec,),
        ).fetchall()
    return "\n\n".join([f"Title: {t}\nContent: {c}" for t, c in rows])

def answer_question(question: str) -> str:
    context = retrieve_context(question)
    response = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=400,
        temperature=0,
        messages=[
            {
                "role": "user",
                "content": f"""
You are an agent in a pension support system.
Use only the context below.

Context:
{context}

Question:
{question}
"""
            }
        ],
    )
    return response.content[0].text

print(answer_question("What happens if a member wants to retire early?"))
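The classify-then-retrieve split described at the top can be sketched as a first agent that labels the request before retrieval runs. The label set and prompt below are assumptions for illustration, not a fixed API:

```python
# Hypothetical label set for routing pension requests.
CATEGORIES = ["eligibility", "contributions", "investments", "other"]

def normalize_label(raw: str) -> str:
    """Map the model's reply onto the label set, defaulting to 'other'."""
    label = raw.strip().lower()
    return label if label in CATEGORIES else "other"

def classify_request(anthropic_client, question: str) -> str:
    """First agent: label the request so later agents can route it."""
    response = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=10,
        temperature=0,
        messages=[{
            "role": "user",
            "content": (
                "Classify this pension request into exactly one of: "
                + ", ".join(CATEGORIES)
                + ". Reply with the single label only.\n\n"
                f"Request: {question}"
            ),
        }],
    )
    return normalize_label(response.content[0].text)
```

The returned label can drive routing, for instance picking which doc_type to retrieve; normalize_label guards against the model replying outside the label set.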

Testing the Integration

Run a query that should clearly match one of your indexed documents.

result = answer_question("Is retirement allowed at age 55?")
print(result)

Expected output (the exact wording varies, but it should be grounded in the indexed policy):

Members may retire early at age 55 with reduced benefits subject to plan approval.

If you get an empty or vague answer, check these first:

  • Your embeddings dimension matches the vector(1536) column size.
  • The table actually contains indexed documents.
  • The retrieval query uses <=> for cosine distance.
  • Your prompt tells Anthropic to answer only from retrieved context.
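The first two checks can be run directly against the database. A small sketch, assuming the schema above; vector_dims() is a function provided by the pgvector extension:

```python
# Sanity checks: document count and stored embedding dimension.
COUNT_SQL = "SELECT count(*) FROM pension_docs;"
DIMS_SQL = "SELECT vector_dims(embedding) FROM pension_docs LIMIT 1;"

def check_index_health(conn, expected_dim: int = 1536):
    """Report row count and whether stored vectors match the schema dimension."""
    count = conn.execute(COUNT_SQL).fetchone()[0]
    row = conn.execute(DIMS_SQL).fetchone()
    dim = row[0] if row else None
    return {
        "documents": count,
        "embedding_dim": dim,
        "dim_matches_schema": dim == expected_dim,
    }
```

A count of zero means indexing never ran; a dimension mismatch means the embedding model and the vector(1536) column disagree.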

Real-World Use Cases

  • Member service copilot
    • Answer benefit questions by retrieving plan rules from pgvector and drafting responses with Anthropic.
  • Trustee meeting assistant
    • Pull prior meeting notes, policy docs, and investment memos into context before generating summaries or action items.
  • Compliance triage agent
    • Classify requests like hardship withdrawal, retirement eligibility, or contribution exceptions, then route them with supporting evidence.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
