How to Integrate LangChain for Lending with PostgreSQL for AI Agents

By Cyprian Aarons · Updated 2026-04-21
Tags: langchain-for-lending, postgresql, ai-agents

LangChain for lending gives you the orchestration layer for loan workflows: intake, document Q&A, policy checks, and agent-driven decision support. PostgreSQL gives you the durable system of record for applications, borrower profiles, underwriting notes, and audit trails. Put them together and you get an AI agent that can reason over lending context while persisting every decision path in a database you can trust.

Prerequisites

  • Python 3.10+
  • A PostgreSQL instance running locally or in your cloud environment
  • A database and user with read/write permissions
  • pip installed
  • Access to a LangChain-compatible lending workflow package or your own lending tools wrapped as LangChain tools
  • An embedding model or LLM provider configured if your agent uses retrieval or generation
  • Environment variables set for:
    • POSTGRES_HOST
    • POSTGRES_PORT
    • POSTGRES_DB
    • POSTGRES_USER
    • POSTGRES_PASSWORD

Install the core dependencies:

pip install langchain langchain-community psycopg2-binary sqlalchemy python-dotenv

Integration Steps

1) Create the PostgreSQL schema for lending data

Start by creating tables for applications and agent audit logs. Keep it simple and explicit; AI agents need structured storage, not vague blobs everywhere.

import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ["POSTGRES_HOST"],
    port=os.environ["POSTGRES_PORT"],
    dbname=os.environ["POSTGRES_DB"],
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
)

cur = conn.cursor()

cur.execute("""
CREATE TABLE IF NOT EXISTS loan_applications (
    id SERIAL PRIMARY KEY,
    applicant_name TEXT NOT NULL,
    amount_requested NUMERIC(12,2) NOT NULL,
    income NUMERIC(12,2) NOT NULL,
    credit_score INT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT NOW()
);
""")

cur.execute("""
CREATE TABLE IF NOT EXISTS agent_audit_log (
    id SERIAL PRIMARY KEY,
    application_id INT REFERENCES loan_applications(id),
    event_type TEXT NOT NULL,
    event_payload JSONB NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);
""")

conn.commit()
cur.close()
conn.close()

This schema gives your agent two things:

  • a source of truth for loan state
  • an immutable log of what the agent saw and decided

2) Connect LangChain to PostgreSQL as a queryable store

For lending workflows, the common pattern is to let the agent retrieve structured context from Postgres before making a decision. LangChain’s SQL tooling works well here through SQLDatabase.

import os

from langchain_community.utilities import SQLDatabase

# Build the connection URI from the same environment variables used above.
uri = (
    f"postgresql+psycopg2://{os.environ['POSTGRES_USER']}:{os.environ['POSTGRES_PASSWORD']}"
    f"@{os.environ['POSTGRES_HOST']}:{os.environ['POSTGRES_PORT']}/{os.environ['POSTGRES_DB']}"
)

db = SQLDatabase.from_uri(uri)

print(db.get_usable_table_names())
print(db.run("SELECT status, COUNT(*) FROM loan_applications GROUP BY status;"))

This is the bridge between your agent and your operational data. The agent can now query application records, counts by status, or underwriting history without custom DB plumbing every time.
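
If you want the agent itself to run ad-hoc reads, one option is to wrap db.run in a tool. A minimal sketch, assuming the db object from the snippet above; the SELECT-only guard is an illustrative safety check, not a complete SQL sanitizer:

from langchain_core.tools import tool

@tool
def query_lending_db(query: str) -> str:
    """Run a read-only SQL query against the lending database."""
    # Illustrative guard: reject anything that is not a plain SELECT.
    if not query.strip().lower().startswith("select"):
        return "Only SELECT queries are allowed."
    return db.run(query)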

3) Wrap lending logic as LangChain tools

If you already have lending-specific logic — affordability checks, policy rules, fraud flags — expose it as tools. LangChain agents call tools directly using structured inputs.

from langchain_core.tools import tool
import psycopg2
import os

@tool
def get_application_summary(application_id: int) -> str:
    """Fetch a loan application summary from PostgreSQL."""
    conn = psycopg2.connect(
        host=os.environ["POSTGRES_HOST"],
        port=os.environ["POSTGRES_PORT"],
        dbname=os.environ["POSTGRES_DB"],
        user=os.environ["POSTGRES_USER"],
        password=os.environ["POSTGRES_PASSWORD"],
    )
    cur = conn.cursor()
    cur.execute(
        """
        SELECT applicant_name, amount_requested, income, credit_score, status
        FROM loan_applications
        WHERE id = %s
        """,
        (application_id,),
    )
    row = cur.fetchone()
    cur.close()
    conn.close()

    if not row:
        return f"Application {application_id} not found"

    return (
        f"Applicant={row[0]}, Amount={row[1]}, Income={row[2]}, "
        f"CreditScore={row[3]}, Status={row[4]}"
    )

Now your lender-facing assistant can call this tool before drafting recommendations or asking for missing documents.
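
During development you can also invoke the tool directly, without an agent, to sanity-check the plumbing (the application id here is an assumed example):

# Direct tool invocation; assumes an application with id 1 exists.
print(get_application_summary.invoke({"application_id": 1}))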

4) Build the agent with LangChain and bind it to Postgres-backed tools

Use a chat model plus tools to create an agent that can inspect applications and write back audit events. The exact model class depends on your provider; the pattern stays the same.

from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a lending operations assistant. Use tools when needed."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

tools = [get_application_summary]

agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({
    "input": "Review application 1 and summarize the risk factors."
})

print(result["output"])

If your workflow needs persistence after each decision, add a second tool that inserts into agent_audit_log. That keeps human review and compliance teams happy because every action is traceable.

5) Persist agent decisions back into PostgreSQL

Agents should not just read data. In lending systems they need to leave an audit trail every time they classify risk or request more documents.

import json
import os
import psycopg2

@tool
def write_audit_event(application_id: int, event_type: str, event_payload: dict) -> str:
    """Write an audit event into PostgreSQL."""
    conn = psycopg2.connect(
        host=os.environ["POSTGRES_HOST"],
        port=os.environ["POSTGRES_PORT"],
        dbname=os.environ["POSTGRES_DB"],
        user=os.environ["POSTGRES_USER"],
        password=os.environ["POSTGRES_PASSWORD"],
    )
    cur = conn.cursor()
    cur.execute(
        """
        INSERT INTO agent_audit_log (application_id, event_type, event_payload)
        VALUES (%s, %s, %s::jsonb)
        RETURNING id;
        """,
        (application_id, event_type, json.dumps(event_payload)),
    )
    log_id = cur.fetchone()[0]
    conn.commit()
    cur.close()
    conn.close()
    return f"Audit event stored with id={log_id}"

In production you would call this tool after scoring risk or generating a recommendation.
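
For instance, you could register both tools with the executor from step 4 and then log the agent's output once it returns. The "risk_review" event type is an assumed naming convention, not something LangChain prescribes:

tools = [get_application_summary, write_audit_event]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({
    "input": "Review application 1 and summarize the risk factors."
})

# Log the agent's recommendation; "risk_review" is an illustrative event type.
write_audit_event.invoke({
    "application_id": 1,
    "event_type": "risk_review",
    "event_payload": {"output": result["output"]},
})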

Testing the Integration

Run a simple end-to-end check: insert one application row, fetch it through LangChain tooling, and confirm the output matches what is in PostgreSQL.

import psycopg2
import os

conn = psycopg2.connect(
    host=os.environ["POSTGRES_HOST"],
    port=os.environ["POSTGRES_PORT"],
    dbname=os.environ["POSTGRES_DB"],
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
)
cur = conn.cursor()

cur.execute("""
INSERT INTO loan_applications (applicant_name, amount_requested, income, credit_score)
VALUES ('Amina Patel', 25000.00, 82000.00, 742)
RETURNING id;
""")
application_id = cur.fetchone()[0]
conn.commit()

cur.close()
conn.close()

print(get_application_summary.invoke({"application_id": application_id}))

Expected output:

Applicant=Amina Patel, Amount=25000.00, Income=82000.00, CreditScore=742, Status=pending

If that works end-to-end, your LangChain tool layer is reading live lending data from PostgreSQL correctly.
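
To make that check automatic rather than eyeball-based, you can assert the tool output against a direct query. A minimal sketch, reusing the application_id and imports from the snippet above:

conn = psycopg2.connect(
    host=os.environ["POSTGRES_HOST"],
    port=os.environ["POSTGRES_PORT"],
    dbname=os.environ["POSTGRES_DB"],
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
)
cur = conn.cursor()
cur.execute(
    "SELECT applicant_name, status FROM loan_applications WHERE id = %s",
    (application_id,),
)
name, status = cur.fetchone()
cur.close()
conn.close()

summary = get_application_summary.invoke({"application_id": application_id})
# Both the applicant name and status should appear in the tool's summary.
assert name in summary and status in summary, "Tool output does not match the database"
print("Integration check passed")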

Real-World Use Cases

  • Loan triage assistant
    • Pull borrower data from PostgreSQL.
    • Have LangChain summarize risk indicators and flag missing docs.
    • Write decisions back to an audit table.
  • Underwriting copilot
    • Query historical approvals and rejections.
    • Compare current applications against internal policy thresholds (see the sketch after this list).
    • Generate underwriter notes with full traceability.
  • Customer servicing agent
    • Retrieve repayment schedules and account status from Postgres.
    • Answer borrower questions about balances or next payment dates.
    • Log every interaction for compliance review.
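
As an example of the underwriting copilot pattern, here is a minimal policy-check tool. The 640 minimum score and the 50% loan-to-income cap are illustrative placeholders, not real policy values:

import os
import psycopg2
from langchain_core.tools import tool

@tool
def check_policy_thresholds(application_id: int) -> str:
    """Flag applications that breach illustrative policy thresholds."""
    conn = psycopg2.connect(
        host=os.environ["POSTGRES_HOST"],
        port=os.environ["POSTGRES_PORT"],
        dbname=os.environ["POSTGRES_DB"],
        user=os.environ["POSTGRES_USER"],
        password=os.environ["POSTGRES_PASSWORD"],
    )
    cur = conn.cursor()
    cur.execute(
        "SELECT amount_requested, income, credit_score FROM loan_applications WHERE id = %s",
        (application_id,),
    )
    row = cur.fetchone()
    cur.close()
    conn.close()

    if not row:
        return f"Application {application_id} not found"

    amount, income, credit_score = row
    flags = []
    if credit_score < 640:  # placeholder minimum score
        flags.append("credit score below minimum")
    if float(amount) > 0.5 * float(income):  # placeholder loan-to-income cap
        flags.append("requested amount exceeds 50% of annual income")
    return "; ".join(flags) if flags else "Within policy thresholds"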

The pattern is straightforward: PostgreSQL stores the facts, LangChain orchestrates reasoning over those facts. For lending systems that means fewer brittle scripts and a cleaner path to auditable AI automation.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

