How to Integrate LangChain for lending with PostgreSQL for production AI

By Cyprian Aarons · Updated 2026-04-21
Tags: langchain-for-lending, postgresql, production-ai

Combining LangChain for lending with PostgreSQL gives you a practical pattern for building loan-focused AI agents that can retrieve policy data, score applications, and persist every decision trail in a database you can trust. In production, that matters because lending workflows need both reasoning and auditability: the model can infer next actions, while PostgreSQL stores customer records, prompts, responses, and approval metadata.

Prerequisites

  • Python 3.10+
  • A running PostgreSQL instance
  • A PostgreSQL user with permissions to create tables and write rows
  • LangChain installed (langchain-core, langchain-openai, langchain-community), plus your lending-specific chain implementation if you use one
  • psycopg or psycopg2-binary
  • Environment variables configured:
    • POSTGRES_HOST
    • POSTGRES_PORT
    • POSTGRES_DB
    • POSTGRES_USER
    • POSTGRES_PASSWORD
  • Access to your lending model endpoint or LLM provider used by LangChain

Integration Steps

  1. Set up your PostgreSQL connection string.

Use a single connection URI so your app, workers, and tests all point at the same database config.

import os

POSTGRES_URI = (
    f"postgresql+psycopg://{os.environ['POSTGRES_USER']}:"
    f"{os.environ['POSTGRES_PASSWORD']}@"
    f"{os.environ['POSTGRES_HOST']}:"
    f"{os.environ.get('POSTGRES_PORT', '5432')}/"
    f"{os.environ['POSTGRES_DB']}"
)

print(POSTGRES_URI)

For production, keep credentials in a secret manager. Do not hardcode them into the agent process.
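As one concrete sketch of that advice, here is what loading the credentials from a secret store could look like. This assumes AWS Secrets Manager with a JSON secret holding `user`, `password`, `host`, `port`, and `dbname` keys; the secret name `prod/lending-db` and the helper names are placeholders for whatever your infrastructure uses.

```python
import json


def build_postgres_uri(creds: dict) -> str:
    """Assemble the same SQLAlchemy-style psycopg URI from a credentials dict."""
    return (
        f"postgresql+psycopg://{creds['user']}:{creds['password']}"
        f"@{creds['host']}:{creds.get('port', '5432')}/{creds['dbname']}"
    )


def load_postgres_uri(secret_id: str = "prod/lending-db") -> str:
    """Fetch DB credentials from a secret manager at startup, never from code."""
    # Deferred import so the pure helper above works without AWS dependencies.
    import boto3

    client = boto3.client("secretsmanager")
    secret = client.get_secret_value(SecretId=secret_id)
    return build_postgres_uri(json.loads(secret["SecretString"]))
```

The split between a pure `build_postgres_uri` and the store-specific fetch makes it easy to swap in Vault, GCP Secret Manager, or plain environment variables in tests.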

  2. Create the lending tables you need for agent memory and audit logs.

You want a schema that captures both business state and model output. That means application data, conversation history, and decision traces.

import psycopg

DDL = """
CREATE TABLE IF NOT EXISTS loan_applications (
    id UUID PRIMARY KEY,
    applicant_name TEXT NOT NULL,
    amount_requested NUMERIC(12,2) NOT NULL,
    income NUMERIC(12,2) NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending',
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS agent_audit_log (
    id BIGSERIAL PRIMARY KEY,
    application_id UUID REFERENCES loan_applications(id),
    prompt TEXT NOT NULL,
    response TEXT NOT NULL,
    model_name TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
"""

with psycopg.connect(
    host=os.environ["POSTGRES_HOST"],
    port=os.environ.get("POSTGRES_PORT", "5432"),
    dbname=os.environ["POSTGRES_DB"],
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
) as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
        conn.commit()

This is the minimum structure I use for regulated workflows: one table for source-of-truth business objects, one table for immutable inference logs.
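If the inference log is meant to be immutable, it helps to enforce that at the database level rather than by convention. A minimal sketch, assuming your service connects as a dedicated role (here the placeholder name `lending_app`) and the grants run under a migration or owner connection:

```python
# Make the audit table append-only for the application role so inference
# logs cannot be rewritten after the fact. "lending_app" is a placeholder
# role name; substitute the role your service actually connects as.
HARDEN_AUDIT_LOG = """
REVOKE UPDATE, DELETE, TRUNCATE ON agent_audit_log FROM lending_app;
GRANT INSERT, SELECT ON agent_audit_log TO lending_app;
GRANT USAGE ON SEQUENCE agent_audit_log_id_seq TO lending_app;
"""


def harden_audit_log(conn):
    """Run the grants from a migration/owner connection, not the app itself."""
    with conn.cursor() as cur:
        cur.execute(HARDEN_AUDIT_LOG)
    conn.commit()
```

With this in place, the application can only ever add rows to agent_audit_log, which is exactly the property a compliance reviewer wants to hear about.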

  3. Build the LangChain lending chain and inject PostgreSQL-backed context.

If you are using a lending-specific chain, treat PostgreSQL as the retrieval layer for customer facts and policy constraints. The chain then reasons over those facts and returns a structured decision.

import json
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_community.utilities.sql_database import SQLDatabase

db = SQLDatabase.from_uri(POSTGRES_URI)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a lending assistant. Return JSON with keys: decision, risk_band, reason."),
    ("human", "Application data:\n{application_json}\n\nPolicy context:\n{policy_rows}")
])

def get_policy_context():
    # Assumes a lending_policies(name, value) table exists in your database;
    # create and seed it alongside the DDL above.
    rows = db.run(
        "SELECT name, value FROM lending_policies ORDER BY name LIMIT 20"
    )
    return rows

def evaluate_application(application):
    policy_rows = get_policy_context()
    chain = prompt | llm
    response = chain.invoke({
        "application_json": json.dumps(application),
        "policy_rows": str(policy_rows),
    })
    return response.content

decision_text = evaluate_application({
    "applicant_name": "Maya Chen",
    "amount_requested": 25000,
    "income": 90000,
})

If your lending package exposes a dedicated runnable or chain class, keep the same pattern: fetch policy/customer facts from PostgreSQL first, then pass them into the LangChain prompt or tool invocation.
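Because the system prompt asks for JSON, it pays to parse the reply defensively before anything touches the database. A sketch of that validation step; the accepted decision values (`approve`, `decline`, `review`) are an assumption you should align with your own policy vocabulary:

```python
import json


def parse_decision(raw_text: str) -> dict:
    """Parse the model's JSON reply, tolerating a markdown fence, and reject
    anything missing the required keys rather than guessing defaults."""
    text = raw_text.strip()
    if text.startswith("```"):
        # Models sometimes wrap JSON in a ```json fence; strip it off.
        text = text.strip("`").lstrip("json").strip()
    decision = json.loads(text)
    missing = {"decision", "risk_band", "reason"} - decision.keys()
    if missing:
        raise ValueError(f"Model reply missing keys: {missing}")
    if decision["decision"] not in {"approve", "decline", "review"}:
        raise ValueError(f"Unexpected decision: {decision['decision']}")
    return decision
```

Failing loudly here is deliberate: in a lending workflow you want a malformed model reply to route to human review, not to be silently coerced into an approval.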

  4. Persist the model result back into PostgreSQL.

The agent should not just answer; it should write its output so downstream systems can consume it without re-running inference.

import uuid

application_id = uuid.uuid4()

application = {
    "id": application_id,
    "applicant_name": "Maya Chen",
    "amount_requested": 25000,
    "income": 90000,
}

result = {
    "decision": "approve",
    "risk_band": "medium",
    "reason": "Debt-to-income ratio is within policy threshold.",
}

with psycopg.connect(
    host=os.environ["POSTGRES_HOST"],
    port=os.environ.get("POSTGRES_PORT", "5432"),
    dbname=os.environ["POSTGRES_DB"],
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
) as conn:
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO loan_applications (id, applicant_name, amount_requested, income, status)
            VALUES (%s, %s, %s, %s, %s)
            """,
            (application_id, application["applicant_name"], application["amount_requested"], application["income"], result["decision"]),
        )
        cur.execute(
            """
            INSERT INTO agent_audit_log (application_id, prompt, response, model_name)
            VALUES (%s, %s, %s, %s)
            """,
            (application_id, json.dumps(application), json.dumps(result), "gpt-4o-mini"),
        )
        conn.commit()

That pattern gives you traceability for compliance reviews and debugging. If an underwriter asks why the system approved a file, you have both input state and generated rationale.
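When that underwriter question arrives, the retrieval side is a single join. A sketch of the lookup, using the two tables defined earlier; the helper name is mine, not part of any library:

```python
# Everything an auditor asks for about one application, oldest inference first.
TRAIL_QUERY = """
SELECT a.applicant_name, a.status, l.prompt, l.response, l.model_name, l.created_at
FROM loan_applications a
JOIN agent_audit_log l ON l.application_id = a.id
WHERE a.id = %s
ORDER BY l.created_at ASC
"""


def fetch_decision_trail(conn, application_id):
    """Return the application row joined with every logged inference for it."""
    with conn.cursor() as cur:
        cur.execute(TRAIL_QUERY, (application_id,))
        return cur.fetchall()
```

Because the audit table references loan_applications(id), a trail can never point at an application that does not exist, which keeps this query trustworthy.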

  5. Wrap it in a transaction-safe service function.

Production systems fail in the middle of writes. Keep database updates atomic so you never store an approval without its audit record.

def process_loan_application(application):
    with psycopg.connect(
        host=os.environ["POSTGRES_HOST"],
        port=os.environ.get("POSTGRES_PORT", "5432"),
        dbname=os.environ["POSTGRES_DB"],
        user=os.environ["POSTGRES_USER"],
        password=os.environ["POSTGRES_PASSWORD"],
        autocommit=False,
    ) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM loan_applications WHERE id = %s", (application["id"],))
            exists = cur.fetchone()[0] > 0
            if exists:
                raise ValueError("Duplicate application id")

            # Replace this with your LangChain invocation.
            decision = {"decision": "approve", "risk_band": "low", "reason": "Stable income"}

            cur.execute(
                """
                INSERT INTO loan_applications (id, applicant_name, amount_requested, income, status)
                VALUES (%s,%s,%s,%s,%s)
                """,
                (
                    application["id"],
                    application["applicant_name"],
                    application["amount_requested"],
                    application["income"],
                    decision["decision"],
                ),
            )

            cur.execute(
                """
                INSERT INTO agent_audit_log (application_id, prompt, response, model_name)
                VALUES (%s,%s,%s,%s)
                """,
                (
                    application["id"],
                    json.dumps(application),
                    json.dumps(decision),
                    "gpt-4o-mini",
                ),
            )
        conn.commit()
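One caveat on the duplicate check above: SELECT-then-INSERT can race when two workers process the same id concurrently. An alternative sketch, not part of the original flow, is to let the primary key arbitrate via PostgreSQL's ON CONFLICT clause and inspect the row count:

```python
# Insert is idempotent: a second worker with the same id inserts zero rows
# instead of raising, with no window between check and write.
IDEMPOTENT_INSERT = """
INSERT INTO loan_applications (id, applicant_name, amount_requested, income, status)
VALUES (%s, %s, %s, %s, %s)
ON CONFLICT (id) DO NOTHING
"""


def insert_application_once(cur, application, status):
    """Returns False if another worker already stored this application id."""
    cur.execute(
        IDEMPOTENT_INSERT,
        (
            application["id"],
            application["applicant_name"],
            application["amount_requested"],
            application["income"],
            status,
        ),
    )
    return cur.rowcount == 1
```

If this returns False, skip the audit-log insert too and treat the call as a duplicate delivery, which keeps the transaction's all-or-nothing guarantee intact.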

Testing the Integration

Run a smoke test that inserts one application and reads it back from PostgreSQL.

test_application = {
    "id": uuid.uuid4(),
    "applicant_name": "Jordan Patel",
    "amount_requested": 12000,
    "income": 64000,
}
process_loan_application(test_application)
# psycopg.connect does not understand the SQLAlchemy driver prefix,
# so swap it for the plain postgresql:// scheme.
with psycopg.connect(POSTGRES_URI.replace("postgresql+psycopg://", "postgresql://")) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT applicant_name, status FROM loan_applications ORDER BY created_at DESC LIMIT 1")
        row = cur.fetchone()
        print(row)

Expected output:

('Jordan Patel', 'approve')

If you also query agent_audit_log, you should see one row tied to that application ID with the serialized prompt and response.

Real-World Use Cases

  • Pre-screening personal loan applications by combining policy retrieval from PostgreSQL with LangChain-based reasoning.
  • Generating underwriter summaries that pull borrower history from PostgreSQL and produce structured explanations for review queues.
  • Building an audit-ready agent that stores every prompt/response pair alongside final credit decisions for compliance teams.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
