How to Integrate LangChain for lending with PostgreSQL for multi-agent systems

By Cyprian Aarons · Updated 2026-04-21

Tags: langchain-for-lending, postgresql, multi-agent-systems

Combining LangChain for lending with PostgreSQL gives you a clean way to build multi-agent systems that can reason over loan workflows, persist shared state, and coordinate decisions across agents. In practice, this is what you need for things like loan intake, document verification, underwriting support, and audit trails without stuffing everything into one brittle agent loop.

Prerequisites

  • Python 3.10+
  • A PostgreSQL instance running locally or in your VPC
  • A valid connection string for PostgreSQL
  • LangChain installed, plus your internal lending-chain wrapper if your team uses one
  • psycopg2-binary or psycopg installed
  • SQLAlchemy installed for connection management
  • Environment variables configured:
    • DATABASE_URL
    • any LangChain provider keys you use for LLM calls
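The two environment variables above can be exported before running any of the snippets below. Every value here is a placeholder, not a real credential or host:

```shell
# Placeholder values — substitute your own host, credentials, and provider key.
export DATABASE_URL="postgresql+psycopg2://lending_user:change-me@localhost:5432/lending"
export OPENAI_API_KEY="sk-..."
```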

Install the core packages:

pip install langchain sqlalchemy psycopg2-binary

If your lending workflow uses a specialized LangChain integration package, install that too:

pip install langchain-community langchain-openai

Integration Steps

  1. Set up the PostgreSQL connection

Use SQLAlchemy as the stable layer between your agents and Postgres. For multi-agent systems, this is better than letting each agent open raw connections on its own.

import os
from sqlalchemy import create_engine, text

DATABASE_URL = os.environ["DATABASE_URL"]
engine = create_engine(DATABASE_URL, pool_size=5, max_overflow=10)

with engine.connect() as conn:
    result = conn.execute(text("SELECT 1"))
    print(result.scalar())

This confirms the database is reachable and gives you a pooled engine you can share across agents.

  2. Create a shared schema for agent state

Multi-agent systems need a common place to store case status, extracted borrower data, and decisions. Keep it simple: one table for cases and one for events.

from sqlalchemy import MetaData, Table, Column, Integer, String, JSON, DateTime, Text
from sqlalchemy.sql import func

metadata = MetaData()

loan_cases = Table(
    "loan_cases",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("application_id", String(64), unique=True, nullable=False),
    Column("borrower_name", String(255), nullable=False),
    Column("status", String(50), nullable=False),
    Column("payload", JSON, nullable=False),
    Column("updated_at", DateTime(timezone=True), server_default=func.now(), onupdate=func.now()),
)

agent_events = Table(
    "agent_events",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("application_id", String(64), nullable=False),
    Column("agent_name", String(100), nullable=False),
    Column("event_type", String(100), nullable=False),
    Column("message", Text, nullable=False),
    Column("created_at", DateTime(timezone=True), server_default=func.now()),
)

metadata.create_all(engine)

This schema lets agents write their own outputs while staying coordinated through the same application record.
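Because every agent filters `agent_events` by `application_id`, it is worth adding an index alongside the schema above; without one, each lookup scans the whole table. The index name below is an assumption. The sketch runs against an in-memory SQLite engine purely so it is self-contained — with the article's Postgres engine the code is identical:

```python
# Minimal sketch: composite index for the common query
# "all events for one case, per agent". Table trimmed to the
# columns the index touches; SQLite stands in for Postgres here.
from sqlalchemy import (
    create_engine, inspect,
    MetaData, Table, Column, Integer, String, Text, Index,
)

metadata = MetaData()
agent_events = Table(
    "agent_events",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("application_id", String(64), nullable=False),
    Column("agent_name", String(100), nullable=False),
    Column("message", Text, nullable=False),
)

# Declaring the Index before create_all() makes it part of the schema,
# so metadata.create_all(engine) emits CREATE INDEX automatically.
Index(
    "ix_agent_events_app_agent",
    agent_events.c.application_id,
    agent_events.c.agent_name,
)

engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)

index_names = [ix["name"] for ix in inspect(engine).get_indexes("agent_events")]
print(index_names)
```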

  3. Wire LangChain into the lending workflow

In a lending system, one agent might extract borrower details from documents while another evaluates policy rules. Here’s a simple example using LangChain’s ChatOpenAI and a prompt-driven chain.

import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a lending assistant that extracts structured loan application data."),
    ("user", "Extract borrower name, income estimate, and requested amount from: {document_text}")
])

extract_chain = prompt | llm

response = extract_chain.invoke({
    "document_text": "Applicant: Sarah Khan. Annual income: $120000. Requested loan amount: $350000."
})

print(response.content)

In production, you would usually force structured output with Pydantic models or tool calling so downstream agents can trust the shape of the data.
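One way to enforce that shape is a Pydantic schema passed to the model's `with_structured_output` method. The field names below are assumptions to match this article's example; the LLM call itself is left as a comment so the sketch runs without credentials:

```python
# Hypothetical extraction schema — field names are assumptions,
# chosen to match the prompt in the example above.
from pydantic import BaseModel, Field

class LoanExtraction(BaseModel):
    borrower_name: str
    income_estimate: int = Field(description="Annual income in USD")
    requested_amount: int = Field(description="Requested loan amount in USD")

# With a configured ChatOpenAI `llm` and the `prompt` from above, this chain
# returns LoanExtraction instances instead of free-form text:
# structured_chain = prompt | llm.with_structured_output(LoanExtraction)
# extraction = structured_chain.invoke({"document_text": "..."})

# Downstream agents can then trust the shape — invalid payloads raise
# a validation error instead of silently flowing through:
sample = LoanExtraction(
    borrower_name="Sarah Khan",
    income_estimate=120_000,
    requested_amount=350_000,
)
print(sample)
```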

  4. Persist agent output into PostgreSQL

Once an agent produces a decision or extracted payload, write it to Postgres immediately. That gives you replayability, auditing, and shared state across multiple agents.

from sqlalchemy import insert

application_id = "APP-10001"

with engine.begin() as conn:
    conn.execute(
        insert(loan_cases).values(
            application_id=application_id,
            borrower_name="Sarah Khan",
            status="extracted",
            payload={
                "income_estimate": 120000,
                "requested_amount": 350000,
                "source": "langchain_extraction"
            }
        )
    )

    conn.execute(
        insert(agent_events).values(
            application_id=application_id,
            agent_name="document_extractor",
            event_type="extraction_complete",
            message="Borrower data extracted and stored in PostgreSQL."
        )
    )

This pattern is what makes multi-agent coordination reliable. Each agent writes facts; no agent becomes the source of truth by itself.
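The same "write facts immediately" pattern applies to later status transitions, and a guarded UPDATE keeps two agents from advancing the same case twice. A self-contained sketch (in-memory SQLite stands in for the article's Postgres engine; the table is trimmed to the relevant columns):

```python
from sqlalchemy import (
    create_engine,
    MetaData, Table, Column, Integer, String,
    insert, update, select,
)

metadata = MetaData()
loan_cases = Table(
    "loan_cases",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("application_id", String(64), unique=True, nullable=False),
    Column("status", String(50), nullable=False),
)

engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(insert(loan_cases).values(
        application_id="APP-10001", status="extracted",
    ))
    # Guarded transition: only move to "underwriting" from "extracted",
    # so two agents racing on the same case cannot double-advance it.
    result = conn.execute(
        update(loan_cases)
        .where(loan_cases.c.application_id == "APP-10001")
        .where(loan_cases.c.status == "extracted")
        .values(status="underwriting")
    )
    # rowcount is 1 if this agent won the transition, 0 if another already did.
    print(result.rowcount)

with engine.connect() as conn:
    status = conn.execute(
        select(loan_cases.c.status)
        .where(loan_cases.c.application_id == "APP-10001")
    ).scalar_one()
print(status)
```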

  5. Read shared state back into another LangChain agent

A second agent can query Postgres before making an underwriting recommendation. This is where the integration becomes useful: one agent extracts data, another reasons over persisted state.

from sqlalchemy import select

with engine.connect() as conn:
    row = conn.execute(
        select(loan_cases).where(loan_cases.c.application_id == application_id)
    ).mappings().first()

context_text = f"""
Application ID: {row['application_id']}
Borrower: {row['borrower_name']}
Status: {row['status']}
Payload: {row['payload']}
"""

underwrite_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an underwriting assistant. Decide if this file needs manual review."),
    ("user", "{context}")
])

underwrite_chain = underwrite_prompt | llm
decision = underwrite_chain.invoke({"context": context_text})

print(decision.content)

That gives your underwriting agent access to durable shared state instead of relying on in-memory messages that disappear between runs.
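Beyond the case row, the underwriting agent can also see how the case got there by replaying `agent_events` into its context. A self-contained sketch of building that timeline (in-memory SQLite in place of the Postgres engine; table trimmed, and `id` order standing in for `created_at` since the sketch omits timestamps):

```python
from sqlalchemy import (
    create_engine,
    MetaData, Table, Column, Integer, String, Text,
    insert, select,
)

metadata = MetaData()
agent_events = Table(
    "agent_events",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("application_id", String(64), nullable=False),
    Column("agent_name", String(100), nullable=False),
    Column("message", Text, nullable=False),
)

engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)

with engine.begin() as conn:
    # Two earlier agents have already logged their facts for this case.
    conn.execute(insert(agent_events), [
        {"application_id": "APP-10001", "agent_name": "document_extractor",
         "message": "Borrower data extracted."},
        {"application_id": "APP-10001", "agent_name": "policy_checker",
         "message": "Income within policy band."},
    ])

with engine.connect() as conn:
    rows = conn.execute(
        select(agent_events.c.agent_name, agent_events.c.message)
        .where(agent_events.c.application_id == "APP-10001")
        .order_by(agent_events.c.id)
    ).all()

# One line per event, ready to drop into the underwriting prompt's {context}.
timeline = "\n".join(f"[{name}] {msg}" for name, msg in rows)
print(timeline)
```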

Testing the Integration

Run a quick end-to-end check: write a case to Postgres, read it back, then send it through LangChain.

test_application_id = "APP-TEST-001"

from sqlalchemy.dialects.postgresql import insert as pg_insert

with engine.begin() as conn:
    conn.execute(
        pg_insert(loan_cases)
        .values(
            application_id=test_application_id,
            borrower_name="Test Borrower",
            status="pending_review",
            payload={"income_estimate": 95000, "requested_amount": 200000}
        )
        # on_conflict_do_nothing is Postgres-specific, so this needs the
        # dialect insert, not the generic sqlalchemy.insert used earlier.
        .on_conflict_do_nothing(index_elements=["application_id"])
    )

with engine.connect() as conn:
    record = conn.execute(
        select(loan_cases.c.borrower_name, loan_cases.c.status)
        .where(loan_cases.c.application_id == test_application_id)
    ).first()

print(record)

Expected output:

('Test Borrower', 'pending_review')

If you also invoke the LangChain step successfully, you should see a natural-language underwriting response based on the same record.

Real-World Use Cases

  • Loan intake orchestration
    • One agent extracts applicant data from PDFs or emails.
    • Another validates fields against policy rules.
    • PostgreSQL stores every event for audit and reprocessing.
  • Underwriting support
    • A retrieval agent pulls historical loan cases from Postgres.
    • A reasoning agent uses those records to flag exceptions or missing documents.
    • Human reviewers get a clean summary instead of raw model output.
  • Collections and servicing workflows
    • Agents track payment status changes in Postgres.
    • LangChain agents generate customer-facing follow-up actions based on delinquency rules.
    • The database becomes the coordination layer across all service agents.

The main design rule here is simple: let LangChain handle reasoning and language tasks; let PostgreSQL handle persistence and shared truth. Once you split those responsibilities cleanly, multi-agent lending systems become much easier to test and operate.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
