How to Integrate FastAPI for wealth management with PostgreSQL for production AI

By Cyprian Aarons · Updated 2026-04-21

FastAPI gives you the API layer for wealth-management workflows, while PostgreSQL gives you the durable state you need for production AI agents. Put them together and you can build systems that retrieve client portfolios, persist model outputs, track advisor actions, and keep audit trails that compliance teams can actually use.

Prerequisites

  • Python 3.11+
  • FastAPI installed and running in your project
  • PostgreSQL 14+ running locally or in your VPC
  • A PostgreSQL database and user with write access
  • psycopg or asyncpg installed for database access
  • uvicorn for serving the FastAPI app
  • Basic understanding of REST endpoints and SQL transactions

Integration Steps

  1. Install the dependencies

    Use FastAPI for your API layer and PostgreSQL for persistence. Async I/O lets the agent handle concurrent portfolio requests without blocking; the examples below use psycopg's synchronous pool for clarity, but psycopg also ships an AsyncConnectionPool if you want the fully async route.

    pip install fastapi uvicorn psycopg[binary,pool] pydantic
    
  2. Create a PostgreSQL connection pool

    Don’t open a new DB connection per request. Use a pool and keep the connection logic isolated in one module.

    # db.py
    import os
    
    from psycopg_pool import ConnectionPool
    
    # Read the DSN from the environment; the fallback is for local development
    # only -- never hard-code real credentials.
    DATABASE_URL = os.environ.get(
        "DATABASE_URL",
        "postgresql://wealth_user:secret@localhost:5432/wealth_ai",
    )
    
    pool = ConnectionPool(
        conninfo=DATABASE_URL,
        min_size=1,
        max_size=10,
        open=True,  # open eagerly; recent psycopg_pool versions warn if left implicit
        kwargs={"autocommit": False},
    )
    
    def get_conn():
        return pool.connection()
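
    Under the hood, `pool.connection()` also scopes a transaction: it commits when the block exits cleanly and rolls back when it raises. A minimal sketch of that contract, using a stand-in connection class for illustration (`FakeConnection` and `connection_scope` are not part of psycopg):

```python
# Illustration only: FakeConnection stands in for a real psycopg connection
# so the commit/rollback contract is visible without a database.
from contextlib import contextmanager

class FakeConnection:
    def __init__(self):
        self.committed = False
        self.rolled_back = False

    def commit(self):
        self.committed = True

    def rollback(self):
        self.rolled_back = True

@contextmanager
def connection_scope(conn):
    # Mirrors pool.connection(): commit if the block succeeds, roll back
    # (and re-raise) if it fails.
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
```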
    
  3. Define your FastAPI models and endpoint

    In wealth management, the common pattern is: receive a client request, validate it with Pydantic, persist it to Postgres, then return an agent-ready response.

    # main.py
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel, Field
    from db import get_conn
    
    app = FastAPI(title="Wealth AI API")
    
    class PortfolioRequest(BaseModel):
        client_id: str = Field(..., min_length=1)
        risk_profile: str = Field(..., examples=["conservative", "balanced", "aggressive"])
        amount_usd: float = Field(..., gt=0)
    
    class PortfolioResponse(BaseModel):
        request_id: int
        status: str
    
    @app.post("/portfolio/recommendations", response_model=PortfolioResponse)
    def create_recommendation(payload: PortfolioRequest):
        try:
            with get_conn() as conn:
                with conn.cursor() as cur:
                    cur.execute(
                        """
                        INSERT INTO portfolio_requests (client_id, risk_profile, amount_usd)
                        VALUES (%s, %s, %s)
                        RETURNING id;
                        """,
                        (payload.client_id, payload.risk_profile, payload.amount_usd),
                    )
                    request_id = cur.fetchone()[0]
                    conn.commit()
            return PortfolioResponse(request_id=request_id, status="queued")
        except Exception:
            # Don't leak connection strings or SQL errors to API clients;
            # log the exception server-side instead.
            raise HTTPException(status_code=500, detail="failed to persist portfolio request")
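
    Note that the `examples` hint on `risk_profile` documents the expected values but doesn't enforce them. A small normalizing validator, shown here as a plain function (a sketch; in the Pydantic model you could wire it in with a `field_validator`):

```python
# Hypothetical normalizer -- not part of the API above; the allowed set
# matches the documented example values.
ALLOWED_RISK_PROFILES = {"conservative", "balanced", "aggressive"}

def validate_risk_profile(value: str) -> str:
    """Normalize case/whitespace and reject values outside the documented set."""
    normalized = value.strip().lower()
    if normalized not in ALLOWED_RISK_PROFILES:
        raise ValueError(
            f"risk_profile must be one of {sorted(ALLOWED_RISK_PROFILES)}, got {value!r}"
        )
    return normalized
```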
    
  4. Create the PostgreSQL schema

    Keep the schema simple and explicit. For production AI systems, you want tables that support traceability: inputs in one table, model outputs in another, and timestamps everywhere.

    CREATE TABLE IF NOT EXISTS portfolio_requests (
        id SERIAL PRIMARY KEY,
        client_id TEXT NOT NULL,
        risk_profile TEXT NOT NULL,
        amount_usd NUMERIC(14,2) NOT NULL,
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );
    
    CREATE TABLE IF NOT EXISTS ai_recommendations (
        id SERIAL PRIMARY KEY,
        request_id INTEGER NOT NULL REFERENCES portfolio_requests(id),
        recommendation JSONB NOT NULL,
        model_name TEXT NOT NULL,
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );
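
    If you'd rather bootstrap the schema from application code than run the DDL by hand, one option is to keep it as a string and apply it at startup. A sketch (`apply_schema` expects a psycopg-style connection; the SQL mirrors the statements above):

```python
# schema.py -- sketch: apply the DDL from application code at startup.
SCHEMA_SQL = """
CREATE TABLE IF NOT EXISTS portfolio_requests (
    id SERIAL PRIMARY KEY,
    client_id TEXT NOT NULL,
    risk_profile TEXT NOT NULL,
    amount_usd NUMERIC(14,2) NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS ai_recommendations (
    id SERIAL PRIMARY KEY,
    request_id INTEGER NOT NULL REFERENCES portfolio_requests(id),
    recommendation JSONB NOT NULL,
    model_name TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
"""

def apply_schema(conn) -> None:
    """Run the DDL in one transaction; IF NOT EXISTS keeps it idempotent."""
    with conn.cursor() as cur:
        cur.execute(SCHEMA_SQL)
    conn.commit()
```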
    
  5. Store AI output back into PostgreSQL

    This is where the integration becomes useful for production AI. The API accepts a wealth-management request; your agent generates an allocation; Postgres stores the result for audit and retrieval.

    # agent_store.py
    import json
    from db import get_conn
    
    def save_recommendation(request_id: int, recommendation: dict, model_name: str):
        with get_conn() as conn:
            with conn.cursor() as cur:
                cur.execute(
                    """
                    INSERT INTO ai_recommendations (request_id, recommendation, model_name)
                    VALUES (%s, %s::jsonb, %s)
                    RETURNING id;
                    """,
                    (request_id, json.dumps(recommendation), model_name),
                )
                rec_id = cur.fetchone()[0]
                conn.commit()
                return rec_id
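
    Before calling `save_recommendation`, it's worth sanity-checking the agent's output. A hypothetical pre-save guard, assuming the recommendation dict carries an `allocations` mapping of asset weights (the key name is an assumption, not part of the schema):

```python
# Hypothetical pre-save guard: assumes the agent's recommendation dict has
# an "allocations" mapping of asset name -> weight (key name is illustrative).
def check_allocation(recommendation: dict, tolerance: float = 1e-6) -> dict:
    """Reject allocations whose weights don't sum to 1."""
    weights = recommendation.get("allocations", {})
    total = sum(weights.values())
    if abs(total - 1.0) > tolerance:
        raise ValueError(f"allocation weights sum to {total}, expected 1.0")
    return recommendation
```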
    

Testing the Integration

Run a quick end-to-end test by posting a request through FastAPI's TestClient (which requires httpx) and checking that Postgres persisted it.

# test_integration.py
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

response = client.post(
    "/portfolio/recommendations",
    json={
        "client_id": "CUST-10021",
        "risk_profile": "balanced",
        "amount_usd": 250000,
    },
)
print(response.json())

Expected output:

{'request_id': 1, 'status': 'queued'}

If you want to verify directly in PostgreSQL:

SELECT id, client_id, risk_profile, amount_usd, created_at
FROM portfolio_requests
ORDER BY id DESC
LIMIT 1;

You should see the inserted client request row with a timestamp.

Real-World Use Cases

  • Advisor recommendation pipeline

    • FastAPI receives suitability inputs.
    • PostgreSQL stores client context and generated recommendations.
    • An AI agent reads historical recommendations to improve future allocations.
  • Compliance-friendly audit logging

    • Persist every prompt input, model output, and human override.
    • Query by client ID or date range during reviews.
    • Keep immutable records for internal controls.
  • Portfolio monitoring workflows

    • Expose endpoints for rebalancing triggers.
    • Store thresholds and alert states in Postgres.
    • Let an agent decide when to notify advisors or clients based on account drift.
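
The audit-review use case above maps to a straightforward join between the two tables. A sketch that builds the query and parameters for `cur.execute` (the function name and string-typed date bounds are illustrative):

```python
# Sketch: build the audit query for one client over a date range.
# Run it as: cur.execute(*build_audit_query(client_id, start, end))
def build_audit_query(client_id: str, start: str, end: str):
    sql = """
        SELECT r.id, r.client_id, a.recommendation, a.model_name, a.created_at
        FROM portfolio_requests r
        JOIN ai_recommendations a ON a.request_id = r.id
        WHERE r.client_id = %s
          AND a.created_at >= %s
          AND a.created_at < %s
        ORDER BY a.created_at;
    """
    return sql, (client_id, start, end)
```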

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
