How to Integrate LangChain for banking with PostgreSQL for multi-agent systems

By Cyprian Aarons · Updated 2026-04-21
Tags: langchain-for-banking · postgresql · multi-agent-systems

Combining LangChain for banking with PostgreSQL gives you a practical control plane for multi-agent systems in regulated environments. LangChain handles orchestration, tool routing, and agent memory, while PostgreSQL gives you durable state, auditability, and shared context across agents. That combo is what you want when one agent is checking balances, another is validating KYC data, and a third is writing an immutable trace of the interaction.

Prerequisites

  • Python 3.10+
  • A PostgreSQL 14+ instance running locally or in your VPC
  • A database user with CREATE, INSERT, SELECT, and UPDATE permissions
  • LangChain installed with the banking integration package you use internally
  • psycopg2-binary or psycopg installed
  • Environment variables configured:
    • DATABASE_URL
    • bank API credentials required by your LangChain banking connector
  • A schema ready for:
    • agent state
    • message history
    • audit logs
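With the prerequisites in place, export the connection settings before running anything. A minimal sketch with placeholder values; `DATABASE_URL` is used throughout this guide, while the credential variable names depend on your banking connector:

```shell
# Placeholder values -- substitute your real connection string and credentials.
export DATABASE_URL="postgresql://agent_user:change-me@localhost:5432/agents"

# Illustrative credential variables; your LangChain banking connector
# defines its own names.
export BANK_API_KEY="change-me"
export BANK_API_BASE_URL="https://sandbox.example-bank.test"
```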

Integration Steps

  1. Install the dependencies

    Keep the Python side simple. You need LangChain, your banking connector package, and a PostgreSQL driver.

pip install langchain langchain-openai psycopg2-binary sqlalchemy
    

    If your banking integration is exposed as a LangChain tool or chat model wrapper, install that package too.

  2. Create the PostgreSQL tables for multi-agent state

    Don’t store agent memory in app process memory. For multi-agent systems, shared state belongs in Postgres so every worker sees the same truth.

    import os
    from sqlalchemy import create_engine, text
    
    DATABASE_URL = os.environ["DATABASE_URL"]
    engine = create_engine(DATABASE_URL)
    
    ddl = """
    CREATE TABLE IF NOT EXISTS agent_sessions (
        session_id TEXT PRIMARY KEY,
        customer_id TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'active',
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
        updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );
    
    CREATE TABLE IF NOT EXISTS agent_messages (
        id BIGSERIAL PRIMARY KEY,
        session_id TEXT NOT NULL REFERENCES agent_sessions(session_id),
        agent_name TEXT NOT NULL,
        role TEXT NOT NULL,
        content TEXT NOT NULL,
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );
    
    CREATE TABLE IF NOT EXISTS audit_events (
        id BIGSERIAL PRIMARY KEY,
        session_id TEXT NOT NULL REFERENCES agent_sessions(session_id),
        event_type TEXT NOT NULL,
        payload JSONB NOT NULL,
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );
    """
    
    with engine.begin() as conn:
        conn.execute(text(ddl))
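PostgreSQL does not index foreign-key columns automatically, so per-session lookups on `agent_messages` and `audit_events` get slower as those tables grow. A sketch of supporting indexes (the index names are illustrative), applied with the same engine as the table DDL:

```python
# Supporting indexes for per-session lookups; index names are illustrative.
index_ddl = """
CREATE INDEX IF NOT EXISTS idx_agent_messages_session
    ON agent_messages (session_id, created_at);
CREATE INDEX IF NOT EXISTS idx_audit_events_session
    ON audit_events (session_id, created_at);
"""

# Apply with the engine created above:
# with engine.begin() as conn:
#     conn.execute(text(index_ddl))
```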
    
  3. Wire LangChain banking tools into your agent

    The pattern here is: expose banking actions as tools, then let the agent decide when to call them. If your banking SDK returns LangChain-compatible tools, register them directly.

    import os
    from langchain_openai import ChatOpenAI
    from langchain.agents import initialize_agent, AgentType
    from langchain.tools import Tool
    
    # Replace this with your actual banking SDK/tool constructor.
    def get_account_balance(customer_id: str) -> str:
        # Example placeholder for a real bank API call.
        return f"Customer {customer_id} balance: 12500.42 USD"
    
    balance_tool = Tool(
        name="get_account_balance",
        func=get_account_balance,
        description="Fetches the current account balance for a customer_id."
    )
    
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    
    # initialize_agent is the classic agent API; newer LangChain releases
    # steer toward LangGraph, but the tool-routing pattern is the same.
    agent = initialize_agent(
        tools=[balance_tool],
        llm=llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
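To get to a true multi-agent setup, add specialist tools the same way. Here is a sketch of a second tool, `check_kyc_status`, with placeholder logic (the `cust_` prefix rule is purely illustrative; a real version would call your KYC provider):

```python
# Placeholder KYC check; a real implementation would query your KYC
# system of record. The "cust_" prefix rule below is illustrative only.
def check_kyc_status(customer_id: str) -> str:
    verified = customer_id.startswith("cust_")
    status = "verified" if verified else "pending"
    return f"Customer {customer_id} KYC status: {status}"

# Register it next to the balance tool so the agent can route between them:
# kyc_tool = Tool(
#     name="check_kyc_status",
#     func=check_kyc_status,
#     description="Returns the KYC verification status for a customer_id.",
# )
# agent = initialize_agent(tools=[balance_tool, kyc_tool], llm=llm,
#                          agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
```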
    
  4. Persist each agent turn to PostgreSQL

    Multi-agent systems fail when you can’t reconstruct who said what and why. Write every tool call and response into Postgres so other agents can pick it up later.

    import json
    from sqlalchemy import text
    
    def save_message(session_id: str, agent_name: str, role: str, content: str):
        with engine.begin() as conn:
            conn.execute(
                text("""
                    INSERT INTO agent_messages (session_id, agent_name, role, content)
                    VALUES (:session_id, :agent_name, :role, :content)
                """),
                {
                    "session_id": session_id,
                    "agent_name": agent_name,
                    "role": role,
                    "content": content,
                },
            )
    
    def save_audit_event(session_id: str, event_type: str, payload: dict):
        with engine.begin() as conn:
            # Use CAST(... AS JSONB) rather than the :payload::jsonb shorthand,
            # which SQLAlchemy's text() misparses as part of the bind parameter.
            conn.execute(
                text("""
                    INSERT INTO audit_events (session_id, event_type, payload)
                    VALUES (:session_id, :event_type, CAST(:payload AS JSONB))
                """),
                {
                    "session_id": session_id,
                    "event_type": event_type,
                    "payload": json.dumps(payload),
                },
            )
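To guarantee that every tool call hits the audit trail, one option is a small wrapper around tool functions. A sketch with the event writer injected as a callable (in production you would pass `save_audit_event`; the `audited` decorator itself is an illustration, not part of LangChain):

```python
import functools

def audited(tool_name, record_event):
    # record_event is any callable with save_audit_event's signature:
    # record_event(session_id, event_type, payload). Injecting it keeps
    # the wrapper decoupled from the database layer and easy to test.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(session_id, *args, **kwargs):
            result = func(*args, **kwargs)
            record_event(session_id, "tool_call",
                         {"tool": tool_name, "args": list(args), "result": result})
            return result
        return wrapper
    return decorator
```

Note that the wrapped function now takes `session_id` as its first argument, so the caller decides which session the audit event belongs to.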
    
  5. Orchestrate multiple agents against shared Postgres state

    One common pattern is a coordinator agent that delegates work to specialist agents. The coordinator reads the session context from PostgreSQL and writes back the result after each step.

    from sqlalchemy import text
    
    def create_session(session_id: str, customer_id: str):
        with engine.begin() as conn:
            conn.execute(
                text("""
                    INSERT INTO agent_sessions (session_id, customer_id)
                    VALUES (:session_id, :customer_id)
                    ON CONFLICT (session_id) DO NOTHING
                """),
                {"session_id": session_id, "customer_id": customer_id},
            )
    
    def load_recent_messages(session_id: str):
        # Read-only work does not need a transaction block.
        with engine.connect() as conn:
            rows = conn.execute(
                text("""
                    SELECT agent_name, role, content
                    FROM agent_messages
                    WHERE session_id = :session_id
                    ORDER BY id
                """),
                {"session_id": session_id},
            ).fetchall()
        return rows
    
    def run_banking_flow(session_id: str, customer_query: str):
        history = load_recent_messages(session_id)
        save_message(session_id, "coordinator", "user", customer_query)
    
        # Fold prior turns into the prompt so the agent sees shared context.
        context = "\n".join(f"{r.agent_name} ({r.role}): {r.content}" for r in history)
        prompt = f"{context}\n\nUser: {customer_query}" if context else customer_query
    
        result = agent.invoke({"input": prompt})
        save_message(session_id, "coordinator", "assistant", result["output"])
        save_audit_event(session_id, "agent_response", {"output": result["output"]})
        return result["output"]
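The coordinator pattern described above can be reduced to a routing table. A minimal sketch, with specialists as plain callables (in practice each entry would be a LangChain agent's `invoke`) and the persistence function injected so the routing logic stays testable:

```python
# Minimal coordinator: route a task to a named specialist, persist the result.
# specialists maps agent names to callables; save_fn matches save_message's
# signature (session_id, agent_name, role, content).
def run_coordinator(session_id, task, specialists, save_fn):
    agent_name = task["agent"]
    handler = specialists[agent_name]
    output = handler(task["query"])
    save_fn(session_id, agent_name, "assistant", output)
    return output
```

With `save_message` passed as `save_fn`, every specialist's answer lands in `agent_messages`, so the next agent in the flow can read it back from Postgres.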
    

Testing the Integration

Run a basic end-to-end check: create a session, store a message, invoke the banking tool through LangChain logic, then verify Postgres has the trace.

session_id = "sess_1001"
customer_id = "cust_7788"

create_session(session_id=session_id, customer_id=customer_id)

response = run_banking_flow(
    session_id=session_id,
    customer_query=f"Check balance for customer {customer_id}"
)

print(response)

with engine.begin() as conn:
    count = conn.execute(
        text("SELECT COUNT(*) FROM agent_messages WHERE session_id = :session_id"),
        {"session_id": session_id},
    ).scalar_one()

print(f"Stored messages: {count}")

Expected output (the agent's final wording may vary):

Customer cust_7788 balance: 12500.42 USD
Stored messages: 2

If you see both the model response and persisted rows in PostgreSQL, the integration is working.

Real-World Use Cases

  • Customer service triage

    • One agent retrieves balances and transaction summaries.
    • Another checks policy or account eligibility.
    • PostgreSQL stores every decision for audit and escalation.
  • Fraud investigation workflows

    • A monitoring agent flags suspicious activity.
    • A review agent pulls historical context from Postgres.
    • A compliance agent writes findings back to an immutable audit table.
  • Loan or claims processing

    • One agent collects documents.
    • Another validates financial data through banking tools.
    • PostgreSQL keeps shared workflow state across retries and handoffs.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
