How to Integrate FastAPI with PostgreSQL for Production AI Lending
FastAPI gives you the API surface to expose credit workflows, while PostgreSQL gives you durable state for applications, decisions, audit trails, and model inputs. In a production AI agent system, that combination lets you move from stateless inference to real lending operations: intake a borrower request, score it, persist the decision context, and return a traceable response.
Prerequisites
- Python 3.11+
- A running PostgreSQL 14+ instance
- A FastAPI application already scaffolded
- `psycopg` or `asyncpg` installed for PostgreSQL access
- `uvicorn` for local API execution
- Environment variables configured:
  - `DATABASE_URL=postgresql://user:password@localhost:5432/lending`
  - `APP_ENV=production`
- A lending domain schema ready to store:
  - applicants
  - loan applications
  - AI scores
  - decision logs
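If the dependencies aren't installed yet, a typical setup looks like the following. This is a sketch assuming `asyncpg` as the driver (as used in the code below) and `httpx` for the client examples later in this tutorial:

```shell
pip install fastapi "uvicorn[standard]" asyncpg httpx
```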
Integration Steps
1. Create the FastAPI app and PostgreSQL connection layer

Start with a clean app module and an async PostgreSQL pool. For production AI systems, use async I/O end-to-end so your agent can handle concurrent requests without blocking on database calls. (Recent FastAPI versions prefer lifespan handlers over the deprecated `on_event` hooks, but `on_event` still works and keeps this example short.)

```python
from fastapi import FastAPI
import os
import asyncpg

app = FastAPI(title="Lending API")

DATABASE_URL = os.environ["DATABASE_URL"]
db_pool: asyncpg.Pool | None = None

@app.on_event("startup")
async def startup():
    global db_pool
    db_pool = await asyncpg.create_pool(DATABASE_URL, min_size=2, max_size=10)

@app.on_event("shutdown")
async def shutdown():
    if db_pool is not None:
        await db_pool.close()
```
2. Define the request model and write the lending endpoint

Use FastAPI's `@app.post()` decorator and Pydantic models to accept loan applications. The endpoint should validate input before it reaches your agent or scoring logic.

```python
from pydantic import BaseModel, Field
from fastapi import HTTPException

class LoanApplication(BaseModel):
    applicant_id: str = Field(..., min_length=3)
    amount: float = Field(..., gt=0)
    income: float = Field(..., gt=0)
    employment_years: int = Field(..., ge=0)

@app.post("/lending/apply")
async def apply_for_loan(payload: LoanApplication):
    if db_pool is None:
        raise HTTPException(status_code=503, detail="Database not ready")

    # Placeholder heuristic — replace with your real scoring model in production
    risk_score = round((payload.income / payload.amount) * 100 + payload.employment_years * 5, 2)
    decision = "approved" if risk_score >= 80 else "review"

    async with db_pool.acquire() as conn:
        await conn.execute(
            """
            INSERT INTO loan_applications (
                applicant_id, amount, income, employment_years, risk_score, decision
            ) VALUES ($1, $2, $3, $4, $5, $6)
            """,
            payload.applicant_id,
            payload.amount,
            payload.income,
            payload.employment_years,
            risk_score,
            decision,
        )

    return {"applicant_id": payload.applicant_id, "risk_score": risk_score, "decision": decision}
```
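As a sanity check, the illustrative scoring heuristic can be exercised outside the app. For the sample payload used later in this tutorial (amount 5000, income 90000, 6 years of employment), it produces a score of 1830.0 (the `score` function name is my own, not part of the endpoint code):

```python
def score(amount: float, income: float, employment_years: int) -> float:
    # Same toy heuristic as the endpoint: income-to-amount coverage plus tenure bonus
    return round((income / amount) * 100 + employment_years * 5, 2)

risk_score = score(amount=5000, income=90000, employment_years=6)
decision = "approved" if risk_score >= 80 else "review"
print(risk_score, decision)  # 1830.0 approved
```

Note that with this formula almost any application where income exceeds the loan amount clears the approval threshold of 80, which is another reason to treat it strictly as a placeholder.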
3. Create the PostgreSQL schema for production persistence

Don't store agent outputs in memory. Persist every application and decision so you can audit model behavior later. Use a migration tool like Alembic in real deployments; for this tutorial, a plain SQL bootstrap works.

```python
CREATE_TABLE_SQL = """
CREATE TABLE IF NOT EXISTS loan_applications (
    id BIGSERIAL PRIMARY KEY,
    applicant_id TEXT NOT NULL,
    amount NUMERIC(12,2) NOT NULL,
    income NUMERIC(12,2) NOT NULL,
    employment_years INT NOT NULL,
    risk_score NUMERIC(10,2) NOT NULL,
    decision TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
"""

async def init_schema():
    async with db_pool.acquire() as conn:
        await conn.execute(CREATE_TABLE_SQL)
        await conn.execute(
            "CREATE INDEX IF NOT EXISTS idx_loan_applications_applicant_id "
            "ON loan_applications(applicant_id);"
        )

# Replaces the startup handler from step 1 so the schema exists before requests arrive
@app.on_event("startup")
async def startup():
    global db_pool
    db_pool = await asyncpg.create_pool(DATABASE_URL, min_size=2, max_size=10)
    await init_schema()
```
4. Add a read endpoint for downstream AI agents

Your agent system usually needs retrieval as much as write access. Add an endpoint that fetches prior lending history so your orchestration layer can make better decisions on repeat applications.

```python
@app.get("/lending/applicants/{applicant_id}/history")
async def get_history(applicant_id: str):
    async with db_pool.acquire() as conn:
        rows = await conn.fetch(
            """
            SELECT amount, income, employment_years, risk_score, decision, created_at
            FROM loan_applications
            WHERE applicant_id = $1
            ORDER BY created_at DESC
            LIMIT 10
            """,
            applicant_id,
        )
    return [
        {
            "amount": float(row["amount"]),
            "income": float(row["income"]),
            "employment_years": row["employment_years"],
            "risk_score": float(row["risk_score"]),
            "decision": row["decision"],
            "created_at": row["created_at"].isoformat(),
        }
        for row in rows
    ]
```
5. Wire the API into an AI agent workflow

In production AI systems, the agent should call the lending API rather than directly touching business tables. That keeps policy enforcement inside FastAPI and leaves PostgreSQL as the source of truth.

```python
import httpx

async def evaluate_borrower(applicant_id: str):
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        response = await client.get(f"/lending/applicants/{applicant_id}/history")
        response.raise_for_status()
        history = response.json()
    if not history:
        return {"action": "request_more_data"}
    latest = history[0]
    if latest["decision"] == "review":
        return {"action": "manual_review", "reason": "Recent application under review"}
    return {"action": "auto_process", "reason": "Historical profile acceptable"}
```
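The routing branch of the agent can be factored into a pure function so it is unit-testable without a running API. A sketch (the `route_application` name is my own, not part of the code above):

```python
def route_application(history: list[dict]) -> dict:
    # No prior applications: the agent should gather more borrower data first
    if not history:
        return {"action": "request_more_data"}
    latest = history[0]  # history is ordered newest-first by the API
    if latest["decision"] == "review":
        return {"action": "manual_review", "reason": "Recent application under review"}
    return {"action": "auto_process", "reason": "Historical profile acceptable"}
```

Keeping HTTP I/O and decision policy separate like this also makes it easier to audit the policy independently of the transport layer.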
Testing the Integration
Run the app:
```shell
uvicorn main:app --reload --port 8000
```
Then verify both write and read paths:
```python
import httpx

payload = {
    "applicant_id": "cust_1001",
    "amount": 5000,
    "income": 90000,
    "employment_years": 6,
}

with httpx.Client(base_url="http://localhost:8000") as client:
    r1 = client.post("/lending/apply", json=payload)
    print(r1.json())
    r2 = client.get("/lending/applicants/cust_1001/history")
    print(r2.json())
```
Expected output (90000 / 5000 × 100 + 6 × 5 = 1830.0, well above the approval threshold of 80):

```json
{
  "applicant_id": "cust_1001",
  "risk_score": 1830.0,
  "decision": "approved"
}
```

```json
[
  {
    "amount": 5000.0,
    "income": 90000.0,
    "employment_years": 6,
    "risk_score": 1830.0,
    "decision": "approved",
    "created_at": "2026-04-21T10:15:30+00:00"
  }
]
```
Real-World Use Cases
- Loan prequalification agents that collect borrower data through FastAPI and persist every scoring event in PostgreSQL for compliance review.
- Human-in-the-loop underwriting where the agent fetches historical applications from PostgreSQL before deciding whether to auto-approve or route to an analyst.
- Audit-ready lending workflows that store prompts, scores, decisions, and timestamps so you can reconstruct why an application was approved or rejected.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit