How to Integrate FastAPI with PostgreSQL for AI Agents in Investment Banking
Why this integration matters
If you’re building AI agents for investment banking, FastAPI gives you the HTTP layer for exposing workflows, approvals, and model-backed actions. PostgreSQL gives you durable state: trades, client profiles, audit logs, prompt traces, and agent memory.
Put them together and you can build agents that don’t just answer questions — they can retrieve live banking data, persist decisions, and operate with traceability.
Prerequisites
- Python 3.10+
- FastAPI installed
- Uvicorn installed
- PostgreSQL 14+
- psycopg (v3) or psycopg2-binary
- A running PostgreSQL database with credentials
- An AI agent runtime or service that can call your API endpoints
- Basic understanding of REST APIs and SQL
- Environment variables configured for database access: POSTGRES_HOST, POSTGRES_PORT, POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD

Install the packages (the pool extra provides psycopg_pool, which the connection-pool code below imports):

pip install fastapi uvicorn "psycopg[binary,pool]" pydantic
Integration Steps
1) Create the FastAPI app and database connection
Start by wiring FastAPI to PostgreSQL using a connection pool. For investment banking workloads, keep the connection lifecycle explicit so your agent endpoints stay predictable under load.
import os
from contextlib import asynccontextmanager
from fastapi import FastAPI
import psycopg
from psycopg_pool import ConnectionPool

DATABASE_URL = (
    f"postgresql://{os.getenv('POSTGRES_USER')}:{os.getenv('POSTGRES_PASSWORD')}"
    f"@{os.getenv('POSTGRES_HOST')}:{os.getenv('POSTGRES_PORT')}/{os.getenv('POSTGRES_DB')}"
)

pool = ConnectionPool(conninfo=DATABASE_URL, min_size=1, max_size=10)

@asynccontextmanager
async def lifespan(app: FastAPI):
    pool.open()
    yield
    pool.close()

app = FastAPI(lifespan=lifespan)
This pattern is better than opening a new DB connection per request. Banking agents often spike on market open or during batch reconciliation windows, and pooling keeps latency stable.
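One gap in the snippet above: if any environment variable is unset, os.getenv returns None and the DSN silently contains the literal string "None", which only fails later when the pool tries to connect. A minimal fail-fast sketch (build_dsn and REQUIRED_VARS are hypothetical helpers, not part of psycopg):

```python
import os

REQUIRED_VARS = ("POSTGRES_HOST", "POSTGRES_PORT", "POSTGRES_DB",
                 "POSTGRES_USER", "POSTGRES_PASSWORD")

def build_dsn(env: dict) -> str:
    """Build a libpq DSN, raising immediately if any setting is missing."""
    missing = [v for v in REQUIRED_VARS if not env.get(v)]
    if missing:
        raise RuntimeError(f"Missing database settings: {', '.join(missing)}")
    return (
        f"postgresql://{env['POSTGRES_USER']}:{env['POSTGRES_PASSWORD']}"
        f"@{env['POSTGRES_HOST']}:{env['POSTGRES_PORT']}/{env['POSTGRES_DB']}"
    )

# Illustrative values; in the app you would pass dict(os.environ).
dsn = build_dsn({
    "POSTGRES_HOST": "localhost", "POSTGRES_PORT": "5432",
    "POSTGRES_DB": "ib", "POSTGRES_USER": "agent", "POSTGRES_PASSWORD": "secret",
})
print(dsn)  # postgresql://agent:secret@localhost:5432/ib
```

Failing at startup beats failing on the first agent request during market hours.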
2) Define your banking data model in PostgreSQL
For an AI agent system, store structured records that the agent can query and update. A common starting point is an orders table plus an audit trail.
def init_db():
    with pool.connection() as conn:
        with conn.cursor() as cur:
            cur.execute("""
                CREATE TABLE IF NOT EXISTS trade_orders (
                    id SERIAL PRIMARY KEY,
                    client_id TEXT NOT NULL,
                    symbol TEXT NOT NULL,
                    side TEXT NOT NULL CHECK (side IN ('buy', 'sell')),
                    quantity INTEGER NOT NULL,
                    status TEXT NOT NULL DEFAULT 'pending',
                    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
                );
            """)
            cur.execute("""
                CREATE TABLE IF NOT EXISTS agent_audit_log (
                    id SERIAL PRIMARY KEY,
                    action TEXT NOT NULL,
                    payload JSONB NOT NULL,
                    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
                );
            """)
        conn.commit()

init_db()
Use JSONB for agent traces because banking workflows change. You’ll want to store raw tool calls, validation results, and approval context without constantly migrating columns.
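Once the audit log grows, ad-hoc JSONB lookups will scan the whole table. PostgreSQL's GIN indexes cover this; a sketch of the DDL (the index name is illustrative):

```sql
-- jsonb_path_ops builds a smaller index that supports @> containment queries.
CREATE INDEX IF NOT EXISTS idx_audit_payload
    ON agent_audit_log USING GIN (payload jsonb_path_ops);

-- Containment queries like this one can then use the index:
SELECT action, created_at
FROM agent_audit_log
WHERE payload @> '{"symbol": "AAPL"}';
```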
3) Expose a FastAPI endpoint for agent-driven order creation
This is where the AI agent talks to your backend. The agent can call this endpoint after validating intent, compliance rules, and required fields.
from pydantic import BaseModel, Field
from fastapi import HTTPException

class TradeOrderIn(BaseModel):
    client_id: str = Field(..., examples=["C12345"])
    symbol: str = Field(..., examples=["AAPL"])
    side: str = Field(..., pattern="^(buy|sell)$")
    quantity: int = Field(..., gt=0)

@app.post("/orders")
def create_order(order: TradeOrderIn):
    with pool.connection() as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                INSERT INTO trade_orders (client_id, symbol, side, quantity)
                VALUES (%s, %s, %s, %s)
                RETURNING id;
                """,
                (order.client_id, order.symbol, order.side, order.quantity),
            )
            order_id = cur.fetchone()[0]
            cur.execute(
                """
                INSERT INTO agent_audit_log (action, payload)
                VALUES (%s, %s::jsonb);
                """,
                ("create_order", order.model_dump_json()),
            )
        conn.commit()
    return {"order_id": order_id, "status": "pending"}
For investment banking systems, this endpoint should be the narrow waist of your architecture. Let the agent propose actions; let FastAPI validate them; let PostgreSQL persist them.
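A concrete narrow-waist concern: agents retry failed HTTP calls, so the same intent can arrive twice and double-book a trade. A minimal in-memory dedupe sketch; in production you would back this with a UNIQUE constraint on a request_id column in trade_orders (all names here are hypothetical):

```python
# Hypothetical idempotency guard: track client-supplied request ids so a
# retried submission is recognized and not inserted a second time.
_seen_request_ids: set[str] = set()

def is_duplicate(request_id: str) -> bool:
    """Return True if this request id was already processed."""
    if request_id in _seen_request_ids:
        return True
    _seen_request_ids.add(request_id)
    return False

print(is_duplicate("req-001"))  # first submission: False
print(is_duplicate("req-001"))  # retry of the same request: True
```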
4) Add a read endpoint for retrieval-augmented agent context
Your AI agent needs live context before it acts. Give it a read API that returns recent orders or client activity from PostgreSQL.
from typing import List
from psycopg.rows import dict_row

class TradeOrderOut(BaseModel):
    id: int
    client_id: str
    symbol: str
    side: str
    quantity: int
    status: str

@app.get("/orders/{client_id}", response_model=List[TradeOrderOut])
def list_orders(client_id: str):
    with pool.connection() as conn:
        # dict_row returns each row as a dict, which pydantic can validate.
        with conn.cursor(row_factory=dict_row) as cur:
            cur.execute(
                """
                SELECT id, client_id, symbol, side, quantity, status
                FROM trade_orders
                WHERE client_id = %s
                ORDER BY created_at DESC;
                """,
                (client_id,),
            )
            rows = cur.fetchall()
    return rows
This is the endpoint your agent uses before generating recommendations. In practice, you’d combine this with compliance checks or policy filters before any execution path.
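Before handing rows to a model, it usually pays to compress them into a compact context string rather than dumping raw JSON into the prompt. A sketch, assuming the dict keys from the TradeOrderOut schema above (summarize_orders is a hypothetical helper):

```python
# Hypothetical helper: condense recent orders into one prompt-friendly line.
def summarize_orders(rows: list[dict], limit: int = 5) -> str:
    lines = [
        f"{r['side']} {r['quantity']} {r['symbol']} [{r['status']}]"
        for r in rows[:limit]
    ]
    return "; ".join(lines) if lines else "no recent orders"

rows = [
    {"side": "buy", "quantity": 100, "symbol": "MSFT", "status": "pending"},
    {"side": "sell", "quantity": 25, "symbol": "GS", "status": "filled"},
]
print(summarize_orders(rows))  # buy 100 MSFT [pending]; sell 25 GS [filled]
```

Capping the row count also keeps token usage predictable for clients with long trading histories.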
5) Connect the agent loop to FastAPI via HTTP calls
Now wire an external AI agent to your API using standard HTTP calls. The agent can fetch context first and then submit a trade request if the policy allows it.
import requests

BASE_URL = "http://localhost:8000"

def get_client_context(client_id: str):
    resp = requests.get(f"{BASE_URL}/orders/{client_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

def submit_trade():
    payload = {
        "client_id": "C12345",
        "symbol": "MSFT",
        "side": "buy",
        "quantity": 100,
    }
    resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

print(get_client_context("C12345"))
print(submit_trade())
That’s the basic control loop. In production you’d usually place policy checks between retrieval and submission so the model never directly bypasses risk controls.
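A sketch of what such a policy gate might look like, as a pure function the loop calls between retrieval and submission. The thresholds and names are illustrative, not a real compliance rule set:

```python
# Hypothetical pre-trade policy gate; values are placeholders only.
MAX_ORDER_QUANTITY = 10_000
RESTRICTED_SYMBOLS = {"XYZ"}  # e.g. symbols on a restricted list

def check_order_policy(symbol: str, quantity: int) -> list[str]:
    """Return a list of violations; an empty list means the order may proceed."""
    violations = []
    if quantity > MAX_ORDER_QUANTITY:
        violations.append(f"quantity {quantity} exceeds cap {MAX_ORDER_QUANTITY}")
    if symbol in RESTRICTED_SYMBOLS:
        violations.append(f"{symbol} is on the restricted list")
    return violations

print(check_order_policy("AAPL", 100))  # [] -> safe to call submit_trade()
print(check_order_policy("XYZ", 50_000))  # two violations -> block and log
```

The key property is that the gate runs server-side, deterministic, and outside the model, so a hallucinated tool call cannot talk its way past it.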
Testing the Integration
Use FastAPI’s test client to verify both API behavior and database persistence.
from fastapi.testclient import TestClient

client = TestClient(app)

def test_create_and_list_order():
    create_resp = client.post("/orders", json={
        "client_id": "C99999",
        "symbol": "JPM",
        "side": "buy",
        "quantity": 50,
    })
    assert create_resp.status_code == 200

    list_resp = client.get("/orders/C99999")
    assert list_resp.status_code == 200
    # Confirm the order actually persisted to PostgreSQL, not just that the API responded.
    assert any(o["client_id"] == "C99999" for o in list_resp.json())

test_create_and_list_order()
print("Integration test passed")
Expected output:
Integration test passed
If you want a stronger check during development:
resp = client.post("/orders", json={
    "client_id": "C11111",
    "symbol": "GS",
    "side": "sell",
    "quantity": 25,
})
print(resp.json())

Expected output (the order_id depends on how many orders already exist):

{'order_id': 1, 'status': 'pending'}
Real-World Use Cases
Trade workflow orchestration
- AI agents collect intent from bankers or clients.
- FastAPI validates requests.
- PostgreSQL stores orders, approvals, and audit history.

Client portfolio intelligence
- Agents query historical positions and recent activity.
- The API returns normalized data for downstream analysis.
- PostgreSQL keeps a durable record for reporting and compliance.

Compliance-aware decision support
- Agents retrieve policy-relevant context before recommending actions.
- Every recommendation is logged in PostgreSQL.
- Risk teams get traceability across all automated steps.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.