How to Integrate LangChain for investment banking with PostgreSQL for multi-agent systems

By Cyprian Aarons · Updated 2026-04-21

Combining LangChain for investment banking with PostgreSQL gives you a clean pattern for building agent systems that can reason over market data, deal documents, and internal workflows while persisting state, audit trails, and shared memory in a relational store. For multi-agent systems, this matters because one agent can extract facts from filings, another can validate them against stored deal context, and PostgreSQL becomes the source of truth between runs.

Prerequisites

  • Python 3.10+
  • A PostgreSQL 14+ instance running locally or in your cloud environment
  • A database user with CREATE, INSERT, SELECT, and UPDATE permissions
  • LangChain installed with the community integrations:
    • langchain
    • langchain-community
    • langchain-openai or another chat model provider
  • PostgreSQL driver:
    • psycopg2-binary
  • Environment variables configured:
    • OPENAI_API_KEY
    • POSTGRES_URL or individual DB connection settings
  • A clear schema for agent memory, such as:
    • conversations
    • deal_notes
    • extracted_entities
    • task_status

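If you configure individual connection settings instead of a single POSTGRES_URL, you can assemble the SQLAlchemy URL yourself. This is a minimal sketch; the helper name build_postgres_url and the PGUSER/PGPASSWORD/PGHOST/PGDATABASE variable names are illustrative conventions, not requirements.

```python
import os
from urllib.parse import quote_plus

def build_postgres_url(
    user: str, password: str, host: str, database: str, port: int = 5432
) -> str:
    # URL-encode the password so special characters don't break the DSN.
    return f"postgresql+psycopg2://{user}:{quote_plus(password)}@{host}:{port}/{database}"

# Prefer a single POSTGRES_URL; otherwise fall back to individual settings.
POSTGRES_URL = os.environ.get("POSTGRES_URL") or build_postgres_url(
    user=os.environ.get("PGUSER", "postgres"),
    password=os.environ.get("PGPASSWORD", ""),
    host=os.environ.get("PGHOST", "localhost"),
    database=os.environ.get("PGDATABASE", "postgres"),
)
```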
Integration Steps

1) Install dependencies and verify the database connection

Start by installing the packages your agents will use. Keep the database connection string in an environment variable so you can reuse it across services.

pip install langchain langchain-community langchain-openai psycopg2-binary sqlalchemy

Then confirm you can reach the database:

import os
from sqlalchemy import create_engine, text

POSTGRES_URL = os.environ["POSTGRES_URL"]
engine = create_engine(POSTGRES_URL)

with engine.connect() as conn:
    result = conn.execute(text("SELECT version();"))
    print(result.fetchone()[0])

If that prints a PostgreSQL version string, your base connectivity is good.

2) Create tables for multi-agent memory and deal context

For investment banking workflows, keep structured records instead of stuffing everything into raw chat history. You want agents to store extracted entities, analyst notes, and task state separately.

from sqlalchemy import create_engine, text

engine = create_engine(POSTGRES_URL)

schema_sql = """
CREATE TABLE IF NOT EXISTS agent_messages (
    id SERIAL PRIMARY KEY,
    session_id TEXT NOT NULL,
    agent_name TEXT NOT NULL,
    role TEXT NOT NULL,
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS deal_entities (
    id SERIAL PRIMARY KEY,
    deal_id TEXT NOT NULL,
    entity_type TEXT NOT NULL,
    entity_value TEXT NOT NULL,
    source TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);
"""

with engine.begin() as conn:
    conn.execute(text(schema_sql))

This gives each agent a durable record of what it said and what it learned about a deal.

3) Build a LangChain agent that writes outputs into PostgreSQL

Use LangChain’s chat model wrapper to generate analysis, then persist the result to PostgreSQL. In a real banking workflow, one agent might summarize an earnings call and another might store key metrics for downstream validation.

import os
from sqlalchemy import create_engine, text
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
engine = create_engine(os.environ["POSTGRES_URL"])

prompt = """
You are an investment banking analyst.
Extract these fields from the text:
- company_name
- transaction_type
- key_risk
Return compact JSON only.
"""

deal_text = """
Acme Corp is evaluating a $250M acquisition of Beta Systems.
Main risk: customer concentration in two enterprise accounts.
"""

response = llm.invoke(prompt + "\n\nTEXT:\n" + deal_text)
analysis = response.content

with engine.begin() as conn:
    conn.execute(
        text("""
            INSERT INTO agent_messages (session_id, agent_name, role, content)
            VALUES (:session_id, :agent_name, :role, :content)
        """),
        {
            "session_id": "deal-001",
            "agent_name": "extractor",
            "role": "assistant",
            "content": analysis,
        },
    )

print(analysis)

The important part here is that LangChain handles generation through ChatOpenAI.invoke(), while PostgreSQL stores the output for later retrieval by other agents.

4) Add retrieval logic so other agents can read shared context

A multi-agent system needs shared memory. The next agent should be able to pull prior outputs from PostgreSQL before making decisions.

from sqlalchemy import create_engine, text

engine = create_engine(POSTGRES_URL)

def get_deal_context(deal_id: str):
    query = text("""
        SELECT entity_type, entity_value, source, created_at
        FROM deal_entities
        WHERE deal_id = :deal_id
        ORDER BY created_at DESC
    """)
    with engine.connect() as conn:
        rows = conn.execute(query, {"deal_id": deal_id}).fetchall()
    return [
        {
            "entity_type": row.entity_type,
            "entity_value": row.entity_value,
            "source": row.source,
            "created_at": row.created_at.isoformat(),
        }
        for row in rows
    ]

Now wire that context into a second LangChain call (note that deal_entities is populated in step 5, so run that step first if you want non-empty context here). This is where coordination between agents starts to pay off.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

context = get_deal_context("deal-001")

message = f"""
You are the risk review agent.
Use this stored context to assess whether the acquisition should proceed:

{context}

Return:
- decision
- rationale
- follow_up_questions
"""

result = llm.invoke(message)
print(result.content)
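To keep the audit trail complete, the risk agent's verdict should go back into agent_messages the same way the extractor's output did. A sketch with a hypothetical persist_agent_message helper; the env-var guard lets you dry-run the function without a database.

```python
import os

def persist_agent_message(session_id: str, agent_name: str, role: str, content: str) -> dict:
    """Build the agent_messages row; write it only when a database is configured."""
    row = {
        "session_id": session_id,
        "agent_name": agent_name,
        "role": role,
        "content": content,
    }
    postgres_url = os.environ.get("POSTGRES_URL")
    if postgres_url:  # skip the write in environments without a database
        from sqlalchemy import create_engine, text

        engine = create_engine(postgres_url)
        with engine.begin() as conn:
            conn.execute(
                text(
                    "INSERT INTO agent_messages (session_id, agent_name, role, content) "
                    "VALUES (:session_id, :agent_name, :role, :content)"
                ),
                row,
            )
    return row

# e.g. persist_agent_message("deal-001", "risk_reviewer", "assistant", result.content)
```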

5) Store structured entities instead of only free-text notes

For banking workflows, store normalized data so downstream agents can query it without parsing prose. That makes reconciliation and audit easier.

from sqlalchemy import create_engine, text

engine = create_engine(POSTGRES_URL)

entities = [
    ("deal-001", "company_name", "Acme Corp", "earnings_call"),
    ("deal-001", "transaction_type", "$250M acquisition", "analyst_summary"),
    ("deal-001", "key_risk", "customer concentration", "risk_agent"),
]

insert_sql = text("""
INSERT INTO deal_entities (deal_id, entity_type, entity_value, source)
VALUES (:deal_id, :entity_type, :entity_value, :source)
""")

with engine.begin() as conn:
    for deal_id, entity_type, entity_value, source in entities:
        conn.execute(
            insert_sql,
            {
                "deal_id": deal_id,
                "entity_type": entity_type,
                "entity_value": entity_value,
                "source": source,
            },
        )

That pattern scales better than dumping everything into one conversation table.
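Once entities live in their own table, downstream agents can also fold them into a per-type lookup rather than iterating raw rows. A small sketch; entities_by_type is an illustrative helper, and the rows would come from a query like SELECT entity_type, entity_value FROM deal_entities WHERE deal_id = :deal_id.

```python
from collections import defaultdict

def entities_by_type(rows: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Fold (entity_type, entity_value) rows into a per-type lookup,
    preserving the order values were inserted."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for entity_type, entity_value in rows:
        grouped[entity_type].append(entity_value)
    return dict(grouped)
```

This shape makes checks like "does this deal have at least one recorded key_risk?" a one-line lookup.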

Testing the Integration

Run a simple end-to-end check: write one record with LangChain output and read it back from PostgreSQL.

from sqlalchemy import create_engine, text

engine = create_engine(POSTGRES_URL)

with engine.connect() as conn:
    rows = conn.execute(
        text("""
            SELECT session_id, agent_name, role, content
            FROM agent_messages
            WHERE session_id = 'deal-001'
            ORDER BY created_at DESC
            LIMIT 1
        """)
    ).fetchall()

print(rows[0])

Expected output:

('deal-001', 'extractor', 'assistant', '{"company_name":"Acme Corp","transaction_type":"acquisition","key_risk":"customer concentration"}')

If you see a row like that returned from Postgres after a LangChain call wrote it there, the integration is working.

Real-World Use Cases

  • Deal diligence assistant
    • One agent extracts facts from CIMs and earnings calls.
    • Another stores them in PostgreSQL.
    • A third compares extracted risks against historical deals.
  • Multi-agent research desk
    • One agent monitors filings and press releases.
    • Another summarizes impact on comparable companies.
    • PostgreSQL keeps shared research notes across analysts and sessions.
  • Investment committee prep
    • Agents draft memo sections from stored notes and structured entities.
    • PostgreSQL holds approval status, comments, and redlines.
    • You get an auditable workflow instead of scattered chat logs.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

