How to Integrate LangChain for banking with PostgreSQL for AI agents

By Cyprian Aarons · Updated 2026-04-21

Combining LangChain for banking with PostgreSQL gives you a practical agent backend: the model can reason over banking workflows, while PostgreSQL stores customer state, audit trails, transaction context, and retrieval data. That’s the pattern you want when building agents that need persistence, traceability, and controlled access to financial records.

The useful part is not just “chat over data.” It’s letting an agent fetch account context, write structured events, and keep conversation memory in a database you already know how to operate in production.

Prerequisites

  • Python 3.10+
  • A running PostgreSQL instance, with a database and a user that has read/write permissions
  • psycopg or psycopg2-binary
  • LangChain installed
  • Your banking-specific LangChain package or internal wrapper configured
  • API key or credentials for your LLM provider
  • Network access from your app to PostgreSQL

Install the core packages:

pip install langchain langchain-community langchain-postgres psycopg[binary]

If your banking stack uses a custom LangChain integration layer, make sure the agent tools expose standard LangChain interfaces like Tool, Runnable, or BaseChatModel.

Integration Steps

  1. Create the PostgreSQL connection

Use a SQLAlchemy-style connection string or a direct psycopg connection depending on the component you’re wiring up. For LangChain persistence layers, PostgreSQL is usually easiest through langchain-postgres.

from sqlalchemy import create_engine, text

POSTGRES_URI = "postgresql+psycopg://agent_user:agent_pass@localhost:5432/banking_ai"
engine = create_engine(POSTGRES_URI)

with engine.connect() as conn:
    # SQLAlchemy 2.x requires raw SQL strings to be wrapped in text()
    result = conn.execute(text("SELECT 1"))
    print(result.scalar())

This confirms your app can reach the database before you wire it into the agent.
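Once the round trip works, it is worth hardening the engine for production. A minimal sketch with commonly used SQLAlchemy pool settings; the numbers are illustrative, not prescriptive, so tune them for your workload:

```python
from sqlalchemy import create_engine

# Illustrative pool settings for a long-running agent service
engine = create_engine(
    "postgresql+psycopg://agent_user:agent_pass@localhost:5432/banking_ai",
    pool_size=5,          # steady-state connections held open
    max_overflow=10,      # extra connections allowed under burst load
    pool_pre_ping=True,   # validate connections before handing them out
    pool_recycle=1800,    # drop connections older than 30 minutes
)
```

`pool_pre_ping` matters most for agents: sessions can sit idle between customer messages, and stale connections otherwise surface as mid-conversation errors.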

  2. Set up a PostgreSQL-backed chat history store

For AI agents, chat history should not live only in memory. Use PostgreSQL so you can recover sessions, audit prompts, and continue workflows after restarts.

from langchain_postgres import PostgresChatMessageHistory

history = PostgresChatMessageHistory(
    connection_string="postgresql+psycopg://agent_user:agent_pass@localhost:5432/banking_ai",
    session_id="customer_1234"
)

history.add_user_message("Check my last three card transactions.")
history.add_ai_message("I can help with that. Let me look up your transaction summary.")
print(history.messages)

This gives you durable session state keyed by session_id. In banking systems, that ID should map to a verified customer session, not just a random UUID. Note that the constructor shown follows the older langchain_community-style API; recent langchain_postgres releases instead take a table name plus an open psycopg connection and require creating the history table first, so check the signature for the version you have installed.
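One lightweight way to enforce that mapping is to derive the session ID from the authenticated customer identity on the server rather than accept it from the client. A sketch, where `make_session_id` and its `channel` prefix are illustrative helpers, not part of LangChain:

```python
import uuid

def make_session_id(customer_id: str, channel: str = "web") -> str:
    """Build a traceable session ID from a verified customer identity.

    Assumes customer_id comes from your authentication layer, never
    from user input. The UUID suffix separates concurrent sessions
    for the same customer.
    """
    return f"{channel}:{customer_id}:{uuid.uuid4().hex[:12]}"

session_id = make_session_id("customer_1234")
print(session_id)  # e.g. web:customer_1234:3f9a1c0d2b7e
```

The structured prefix also makes audit queries simple: every row keyed by this ID can be filtered back to a customer and channel.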

  3. Connect LangChain for banking tools to retrieve account data

Your banking agent typically needs tools for account lookup, balance checks, transaction summaries, or policy-aware actions. Wrap those operations as LangChain tools so the model can call them explicitly.

from langchain_core.tools import tool
from sqlalchemy import text

@tool
def get_account_summary(customer_id: str) -> str:
    """
    Fetch a customer's high-level banking summary from the operational database.
    """
    with engine.connect() as conn:
        rows = conn.execute(
            text(
                """
                SELECT account_number, account_type, current_balance
                FROM accounts
                WHERE customer_id = :customer_id
                ORDER BY account_number
                """
            ),
            {"customer_id": customer_id},  # bound parameters, never string interpolation
        ).fetchall()

    return "\n".join(
        f"{r[0]} | {r[1]} | {r[2]}"
        for r in rows
    ) or "No accounts found."

In production, keep this tool read-only unless there is a separate authorization layer for write actions.
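One coarse application-level guard is to reject anything that is not a plain SELECT before it reaches the driver. The `assert_read_only` helper below is a sketch for illustration only; the stronger control is a dedicated PostgreSQL role granted only SELECT on the relevant tables:

```python
def assert_read_only(sql: str) -> str:
    """Allow only plain SELECT statements through this tool's DB path.

    Coarse by design: it blocks CTEs and multi-statement strings too,
    which is the safe direction for a read-only tool.
    """
    first_word = sql.lstrip().split(None, 1)[0].upper()
    if first_word != "SELECT":
        raise PermissionError(f"Blocked non-read statement: {first_word}")
    return sql

print(assert_read_only("SELECT current_balance FROM accounts"))
try:
    assert_read_only("UPDATE accounts SET current_balance = 0")
except PermissionError as exc:
    print(exc)  # Blocked non-read statement: UPDATE
```

Call it on any SQL a tool is about to execute; defense in depth on top of the database-role restriction, not a replacement for it.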

  4. Build the agent with both memory and tools

Now connect the tool and PostgreSQL-backed history into an agent loop. The exact class names vary by your LangChain setup (newer releases deprecate initialize_agent in favor of LangGraph-based agent constructors), but the pattern is consistent: model + tools + persistent memory.

from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

tools = [get_account_summary]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

response = agent.invoke({
    "input": "Show me the account summary for customer_1234"
})

print(response["output"])

If your banking package exposes its own helper like create_banking_agent(...), keep the same structure underneath: persistent history in PostgreSQL and explicit bank-domain tools.

  5. Persist interaction events for auditability

Banking systems need traceability. Store important events such as tool calls, decisions, and final outputs in a dedicated table.

from datetime import datetime, timezone
from sqlalchemy import text

def log_agent_event(session_id: str, event_type: str, payload: str):
    with engine.begin() as conn:  # begin() commits the transaction on success
        conn.execute(
            text(
                "INSERT INTO agent_events (session_id, event_type, payload, created_at) "
                "VALUES (:session_id, :event_type, :payload, :created_at)"
            ),
            {
                "session_id": session_id,
                "event_type": event_type,
                "payload": payload,
                "created_at": datetime.now(timezone.utc),  # utcnow() is deprecated
            },
        )

log_agent_event(
    "customer_1234",
    "tool_call",
    "get_account_summary(customer_1234)"
)

This is where PostgreSQL earns its place: durable logs that compliance teams can query without touching application memory.
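The agent_events table needs to exist before the insert above will work. Here is a self-contained sketch of the schema and the write/read round trip, using SQLite in memory so it runs anywhere; in PostgreSQL you would use an IDENTITY primary key and a TIMESTAMPTZ column instead:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
# PostgreSQL equivalent: id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
# created_at TIMESTAMPTZ NOT NULL DEFAULT now()
conn.execute("""
    CREATE TABLE agent_events (
        id INTEGER PRIMARY KEY,
        session_id TEXT NOT NULL,
        event_type TEXT NOT NULL,
        payload TEXT,
        created_at TEXT NOT NULL
    )
""")

conn.execute(
    "INSERT INTO agent_events (session_id, event_type, payload, created_at) "
    "VALUES (?, ?, ?, ?)",
    ("customer_1234", "tool_call", "get_account_summary(customer_1234)",
     datetime.now(timezone.utc).isoformat()),
)

row = conn.execute(
    "SELECT session_id, event_type FROM agent_events ORDER BY id"
).fetchone()
print(row)  # ('customer_1234', 'tool_call')
```

Run the real DDL once as a migration, not at agent startup, so the schema stays under version control alongside your other banking tables.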

Testing the Integration

Run a simple end-to-end check: save a message to history, call the bank tool through the agent path, and verify that both DB writes and responses work.

test_session = PostgresChatMessageHistory(
    connection_string="postgresql+psycopg://agent_user:agent_pass@localhost:5432/banking_ai",
    session_id="test_session_001"
)

test_session.add_user_message("What is my balance?")
print(len(test_session.messages))

result = get_account_summary.invoke({"customer_id": "customer_1234"})
print(result)

Expected output (assuming two sample rows exist in accounts):

1
ACC1001 | checking | 2450.75
ACC2003 | savings | 18000.00

If the first line prints 1, your PostgreSQL chat history is working. If the second block returns rows from accounts, your tool integration is wired correctly.

Real-World Use Cases

  • Customer service agents
    • Answer balance questions, recent transaction queries, and fee explanations while keeping full conversation state in PostgreSQL.
  • Ops and compliance assistants
    • Track every tool call and decision path for audit review without relying on volatile app memory.
  • Personalized banking workflows
    • Build agents that remember customer preferences across sessions and route them through policy-safe bank operations.

The main rule here is simple: let LangChain handle reasoning and orchestration; let PostgreSQL handle state and auditability. That split keeps your AI agent system maintainable when it moves from prototype to regulated production.


By Cyprian Aarons, AI Consultant at Topiax.
