How to Integrate Anthropic for lending with pgvector for multi-agent systems

By Cyprian Aarons · Updated 2026-04-21
Tags: anthropic-for-lending · pgvector · multi-agent-systems

Combining Anthropic for lending with pgvector gives you a clean pattern for loan workflows that need both reasoning and retrieval. Anthropic handles the policy-heavy parts of lending decisions, while pgvector stores borrower history, document embeddings, and agent memory so multiple agents can coordinate on the same case.

This is the right setup when you need one agent to summarize financial documents, another to check policy constraints, and a third to pull similar historical cases from vector search before making a recommendation.

Prerequisites

  • Python 3.10+
  • PostgreSQL 14+ with the pgvector extension installed
  • An Anthropic API key
  • A database user with permission to create tables and extensions
  • pip packages:
    • anthropic
    • psycopg[binary]
    • pgvector
    • sqlalchemy
  • A lending dataset or sample loan application documents
  • A clear multi-agent design:
    • retrieval agent
    • underwriting agent
    • compliance agent

Install the dependencies:

pip install anthropic "psycopg[binary]" pgvector sqlalchemy

Integration Steps

  1. Set up PostgreSQL and pgvector

Create the extension and a table that can store embeddings for loan documents, borrower notes, or prior case summaries.

import psycopg

conn = psycopg.connect("postgresql://postgres:postgres@localhost:5432/lending")
conn.execute("CREATE EXTENSION IF NOT EXISTS vector;")

conn.execute("""
CREATE TABLE IF NOT EXISTS loan_memory (
    id SERIAL PRIMARY KEY,
    case_id TEXT NOT NULL,
    content TEXT NOT NULL,
    embedding VECTOR(1536)
);
""")

conn.commit()
conn.close()

Anthropic does not currently provide its own embeddings API, so vectors typically come from a separate embedding model or service. Whatever you use, keep the embedding dimension consistent with that model's output; the table above uses 1536 as an example shape for stored vectors.
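As loan_memory grows, an approximate-nearest-neighbor index keeps similarity search fast. A minimal sketch (the index name is hypothetical, and HNSW support requires pgvector 0.5.0 or newer; `vector_l2_ops` matches the `<->` operator used in the queries later in this guide):

```python
# Hypothetical index name; adjust to your schema and pgvector version.
# vector_l2_ops pairs with the <-> (Euclidean) operator; use
# vector_cosine_ops instead if you query with <=>.
INDEX_SQL = """
CREATE INDEX IF NOT EXISTS loan_memory_embedding_idx
ON loan_memory USING hnsw (embedding vector_l2_ops);
"""

# Apply it on an open psycopg connection: conn.execute(INDEX_SQL)
```

Without an index, pgvector falls back to an exact sequential scan, which is correct but slow once you have more than a few thousand rows.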

  2. Initialize Anthropic for lending workflows

Use Anthropic to generate underwriting summaries, policy checks, or structured recommendations from retrieved context.

import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

loan_context = """
Applicant: Jane Doe
Income: $120,000
Debt-to-income ratio: 31%
Requested amount: $280,000
Property type: primary residence
"""

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=400,
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": f"""
You are a lending analyst.
Review this application and return:
1. risk summary
2. approval recommendation
3. key missing data

{loan_context}
"""
        }
    ]
)

print(response.content[0].text)

This is the part your underwriting or compliance agent calls after retrieval has assembled relevant evidence.

  3. Generate embeddings and store them in pgvector

In a production system, embeddings usually come from a dedicated embedding model or service. Store them in PostgreSQL so your agents can retrieve similar cases during decision-making.
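For local development, before the real embedding service is wired in, a deterministic toy embedding lets you exercise the storage and retrieval path end to end. This is purely a stand-in (the function name is ours, and the vectors carry no semantic meaning), so swap in your real embedding model before relying on search quality:

```python
import hashlib
import struct

def toy_embedding(text: str, dim: int = 1536) -> list[float]:
    # Deterministic pseudo-embedding for local testing ONLY.
    # Repeatedly hashes the text and unpacks the digests into floats in [0, 1).
    values: list[float] = []
    counter = 0
    while len(values) < dim:
        digest = hashlib.sha256(f"{counter}:{text}".encode()).digest()
        for i in range(0, len(digest), 4):
            if len(values) >= dim:
                break
            (n,) = struct.unpack("<I", digest[i:i + 4])
            values.append(n / 2**32)
        counter += 1
    return values
```

Because the output is deterministic and the right length, inserts and similarity queries behave mechanically like production, even though the nearest neighbors are meaningless.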

import psycopg
from pgvector.psycopg import register_vector

def store_case(case_id: str, content: str, embedding: list[float]):
    with psycopg.connect("postgresql://postgres:postgres@localhost:5432/lending") as conn:
        # register_vector teaches psycopg to bind Python lists/arrays
        # as the vector type; without it, inserts can fail to adapt.
        register_vector(conn)
        with conn.cursor() as cur:
            cur.execute(
                """
                INSERT INTO loan_memory (case_id, content, embedding)
                VALUES (%s, %s, %s)
                """,
                (case_id, content, embedding),
            )
        conn.commit()

sample_embedding = [0.01] * 1536
store_case(
    "case_1001",
    "Approved mortgage application with stable income and low DTI.",
    sample_embedding,
)

For real usage, replace sample_embedding with vectors generated from your embedding pipeline before inserting into loan_memory.

  4. Retrieve similar lending cases for the agent

Use pgvector similarity search to fetch prior cases that match the current application. This gives your multi-agent system memory instead of stateless prompts.

import psycopg
from pgvector.psycopg import register_vector

def find_similar_cases(query_embedding: list[float], limit: int = 3):
    with psycopg.connect("postgresql://postgres:postgres@localhost:5432/lending") as conn:
        # Needed so the query embedding binds as a vector parameter.
        register_vector(conn)
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT case_id, content
                FROM loan_memory
                ORDER BY embedding <-> %s::vector
                LIMIT %s;
                """,
                (query_embedding, limit),
            )
            return cur.fetchall()

matches = find_similar_cases([0.01] * 1536)

for case_id, content in matches:
    print(case_id, content)

The <-> operator is pgvector's Euclidean (L2) distance operator. pgvector also provides <#> for negative inner product and <=> for cosine distance; choose the operator that matches how your embedding model was trained, and make sure any index you create uses the matching operator class.
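If you want to switch metrics without duplicating SQL, one option is to parameterize the operator. A small sketch (function and table names follow this guide; both placeholders are bound at execute time with `cur.execute(sql, (embedding, limit))`):

```python
# pgvector distance operators and what they measure.
OPERATORS = {
    "l2": "<->",             # Euclidean distance
    "inner_product": "<#>",  # negative inner product
    "cosine": "<=>",         # cosine distance
}

def similarity_sql(metric: str = "l2") -> str:
    # Builds the nearest-neighbor query for the chosen metric.
    # The two %s placeholders are the query embedding and the LIMIT.
    op = OPERATORS[metric]
    return (
        "SELECT case_id, content FROM loan_memory "
        f"ORDER BY embedding {op} %s::vector LIMIT %s;"
    )
```

Only the operator is interpolated (from a fixed dictionary); the embedding and limit stay as bound parameters, so the query remains safe from injection.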

  5. Wire retrieval into Anthropic’s decision prompt

Now combine both systems. Retrieve similar cases from pgvector first, then pass them into Anthropic as evidence for the lending agent.

import os
import psycopg
from anthropic import Anthropic
from pgvector.psycopg import register_vector

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def get_case_context(query_embedding: list[float]) -> str:
    with psycopg.connect("postgresql://postgres:postgres@localhost:5432/lending") as conn:
        # Bind the list parameter as a vector.
        register_vector(conn)
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT case_id, content
                FROM loan_memory
                ORDER BY embedding <-> %s::vector
                LIMIT 5;
                """,
                (query_embedding,),
            )
            rows = cur.fetchall()

    return "\n".join(f"- {case_id}: {content}" for case_id, content in rows)

current_application = """
Applicant has stable employment but recent credit utilization increased.
Requested amount is $180,000 for refinance.
"""

retrieved_cases = get_case_context([0.01] * 1536)

result = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=500,
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": f"""
You are one agent in a multi-agent lending system.
Use these similar cases as reference:

{retrieved_cases}

Current application:
{current_application}

Return JSON with fields:
- decision
- rationale
- risk_flags
- follow_up_questions
"""
        }
    ]
)

print(result.content[0].text)

That pattern is what you want in production:

  • pgvector provides grounded retrieval across prior loans and policy notes.
  • Anthropic turns that retrieved context into a structured lending recommendation.
  • Other agents can reuse the same vector store for compliance checks or fraud signals.
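The three-agent flow described above can be sketched as a plain pipeline where each agent is an injected callable. This is a hypothetical orchestration shape (the function names are ours, not an Anthropic API); it lets you test the hand-offs with stubs before wiring in real Claude calls and pgvector queries:

```python
from typing import Callable

def run_lending_pipeline(
    application: str,
    extract_facts: Callable[[str], str],
    retrieve_cases: Callable[[str], str],
    decide: Callable[[str, str], str],
) -> str:
    # Agent 1: pull structured borrower facts out of the raw application.
    facts = extract_facts(application)
    # Agent 2: use those facts to fetch comparable cases (e.g. from pgvector).
    evidence = retrieve_cases(facts)
    # Agent 3: produce the final recommendation from facts + evidence.
    return decide(facts, evidence)
```

In production, `extract_facts` and `decide` would wrap `client.messages.create` calls and `retrieve_cases` would wrap the pgvector query; in tests, plain lambdas are enough to verify the wiring.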

Testing the Integration

Run an end-to-end test by inserting one known case into pgvector and asking Anthropic to reason over it.

test_query_embedding = [0.01] * 1536

similar_cases = find_similar_cases(test_query_embedding)
assert len(similar_cases) > 0

prompt = f"""
You are validating a lending assistant.

Retrieved cases:
{similar_cases}

Return 'PASS' if the retrieved context looks usable.
"""

resp = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=50,
    temperature=0,
    messages=[{"role": "user", "content": prompt}]
)

print(resp.content[0].text)

Expected output:

PASS

If you get no rows back from pgvector, check that your embedding dimension matches the column definition (1536 in this guide) and confirm the vector extension is enabled. If Anthropic returns free-form text when you expected structured output, tighten the prompt and validate the response format in your application layer before acting on it.
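That application-layer validation can be as simple as extracting and checking the JSON before any downstream agent trusts it. A minimal sketch, assuming the field names from the decision prompt in step 5 (Claude sometimes wraps JSON in prose or a code fence, so we locate the outermost braces rather than parsing the raw text directly):

```python
import json

REQUIRED_FIELDS = {"decision", "rationale", "risk_flags", "follow_up_questions"}

def parse_decision(raw_text: str) -> dict:
    # Find the first '{' and last '}' to strip any surrounding prose/fences.
    start = raw_text.find("{")
    end = raw_text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    payload = json.loads(raw_text[start:end + 1])
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return payload
```

On a ValueError you can retry the model call with a stricter prompt, or route the case to human review rather than failing silently.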

Real-World Use Cases

  • Loan officer copilot

    • Retrieve similar approved/declined applications from pgvector.
    • Ask Anthropic to draft a recommendation memo for human review.
  • Compliance review agent

    • Store policy clauses and prior exceptions in pgvector.
    • Use Anthropic to compare a new application against internal lending rules.
  • Multi-agent underwriting workflow

    • One agent extracts borrower facts.
    • One agent retrieves comparable cases.
    • One agent calls Anthropic to produce final risk reasoning and follow-up questions.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
