How to Integrate LangChain with PostgreSQL for Production AI in Banking
Combining LangChain with PostgreSQL gives you a practical pattern for production banking AI agents: the model reasons over customer or transaction context, while PostgreSQL stores durable state, audit logs, conversation history, and retrieval-ready data. In banking that matters, because you need traceability, controlled access, and predictable persistence around every agent action.
Prerequisites
- Python 3.10+
- PostgreSQL 14+ running locally or in your cloud environment
- A PostgreSQL database and user with read/write permissions
- `pip` installed
- Access to your LangChain banking package and its credentials
- Environment variables configured for:
  - `DATABASE_URL`
  - any LangChain provider keys your banking agent uses
- Optional but recommended:
  - `psycopg2-binary` or `psycopg`
  - `sqlalchemy`
  - `langchain`
  - `langchain-postgres` or your PostgreSQL integration package
Integration Steps
1) Install the Python dependencies
Start by installing the packages you need for both the agent runtime and the database connection.
```bash
pip install langchain langchain-openai langchain-community langchain-postgres psycopg2-binary sqlalchemy python-dotenv
```
If your banking stack uses a vendor-specific LangChain package, install that too. The key point is that you want a LangChain-compatible chat model plus a PostgreSQL driver that can handle production connections.
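Since `python-dotenv` is in the install list, a local `.env` file keeps credentials out of code during development. The values below are placeholders (assuming a local Postgres and an OpenAI-compatible model provider); in production, inject these via your secrets manager instead of a file:

```
DATABASE_URL=postgresql+psycopg2://bank_app:change-me@localhost:5432/agents
OPENAI_API_KEY=your-provider-key
```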
2) Create a PostgreSQL connection string and test connectivity
Use a real connection string from your environment. In production, keep credentials out of code and use secrets management.
```python
import os

from dotenv import load_dotenv
from sqlalchemy import create_engine, text

load_dotenv()
DATABASE_URL = os.getenv("DATABASE_URL")

engine = create_engine(DATABASE_URL, pool_pre_ping=True)

with engine.connect() as conn:
    result = conn.execute(text("SELECT version();"))
    print(result.fetchone()[0])
```
This verifies that your app can reach PostgreSQL before you wire in the agent. If this fails, fix networking, credentials, SSL settings, or firewall rules first.
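When connectivity does fail, a malformed `DATABASE_URL` is a common culprit. A small stdlib sketch that sanity-checks the URL's shape before SQLAlchemy ever sees it (the `check_database_url` helper is an illustration of mine, not part of any library):

```python
from urllib.parse import urlsplit

def check_database_url(url: str) -> dict:
    """Fail fast on an obviously malformed DATABASE_URL before connecting."""
    parts = urlsplit(url)
    problems = []
    if not parts.scheme.startswith("postgresql"):
        problems.append(f"unexpected scheme: {parts.scheme!r}")
    if not parts.hostname:
        problems.append("missing hostname")
    if not parts.path or parts.path == "/":
        problems.append("missing database name")
    return {
        "host": parts.hostname,
        "port": parts.port or 5432,  # default Postgres port if none given
        "database": parts.path.lstrip("/"),
        "problems": problems,
    }

# Example with a hypothetical connection string (e.g. port 6432 for a PgBouncer pool):
info = check_database_url("postgresql+psycopg2://bank_app:secret@db.internal:6432/agents")
print(info)
```

Running this at startup turns a cryptic driver error into an actionable message.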
3) Set up LangChain for banking with persistent message storage
For production AI agents in banking, purely stateless prompting is not enough. Persisting conversation state in PostgreSQL lets you resume sessions, audit decisions, and support compliance workflows.
A common pattern is to use PostgreSQL-backed chat history with LangChain’s message history interfaces.
```python
import os

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
# The community integration accepts a plain connection string. The newer
# langchain-postgres package uses a different constructor (it takes psycopg
# connection objects), so check the API of whichever package you install.
from langchain_community.chat_message_histories import PostgresChatMessageHistory

DATABASE_URL = os.environ["DATABASE_URL"]

def get_session_history(session_id: str):
    return PostgresChatMessageHistory(
        connection_string=DATABASE_URL,
        session_id=session_id,
    )

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a banking assistant. Follow policy and never expose sensitive data."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])

chain = prompt | llm

agent_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)

response = agent_with_history.invoke(
    {"input": "Summarize the last account inquiry."},
    config={"configurable": {"session_id": "customer_123"}},
)
print(response.content)
This is the core integration pattern: LangChain handles orchestration, PostgreSQL stores session state. For banking workloads, this gives you durable memory without depending on in-process state.
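If it helps to see the pattern without a database, here is a minimal stdlib stand-in for the session-scoped store. This is illustrative only (the real table is created and managed by the LangChain integration): each `session_id` maps to an ordered list of role/content rows, which is essentially what the message-history table holds.

```python
from collections import defaultdict

class SessionHistoryStore:
    """Toy stand-in for a Postgres-backed chat history table."""

    def __init__(self):
        # session_id -> ordered list of (role, content) rows
        self._rows = defaultdict(list)

    def append(self, session_id: str, role: str, content: str) -> None:
        self._rows[session_id].append((role, content))

    def messages(self, session_id: str):
        return list(self._rows[session_id])

store = SessionHistoryStore()
store.append("customer_123", "human", "What is my card dispute status?")
store.append("customer_123", "ai", "Your dispute CASE-42 is under review.")
store.append("customer_456", "human", "Unrelated session.")

# Only customer_123's rows come back, exactly like a WHERE session_id = ... query.
print(store.messages("customer_123"))
```

The production version swaps the dict for the database table, so history survives restarts and is shared across app instances.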
4) Store structured banking events in PostgreSQL
Do not rely on chat history alone. Store structured events like risk flags, customer intents, case IDs, and tool outputs in separate tables so downstream systems can query them directly.
```python
from sqlalchemy import Table, Column, Integer, String, MetaData, DateTime, JSON
from sqlalchemy.sql import func

metadata = MetaData()

banking_events = Table(
    "banking_events",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("session_id", String(100), nullable=False),
    Column("event_type", String(50), nullable=False),
    Column("payload", JSON, nullable=False),
    Column("created_at", DateTime(timezone=True), server_default=func.now()),
)

metadata.create_all(engine)

event_payload = {
    "customer_id": "CUST-10001",
    "intent": "card_dispute",
    "confidence": 0.94,
}

with engine.begin() as conn:
    conn.execute(
        banking_events.insert().values(
            session_id="customer_123",
            event_type="intent_classified",
            payload=event_payload,
        )
    )
```
This is where production systems differ from demos. You want the LLM output captured as queryable data so fraud teams, support teams, and auditors can inspect it later.
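Because those teams will mostly filter this table by session and recency, an index on those columns is worth adding up front. A sketch of the DDL (the index name is arbitrary):

```sql
-- Speeds up the per-session, most-recent-first lookups used in the next step.
CREATE INDEX IF NOT EXISTS idx_banking_events_session_created
    ON banking_events (session_id, created_at DESC);
```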
5) Query PostgreSQL inside an agent workflow
Once data is stored in PostgreSQL, your LangChain workflow can pull it back into prompts or tools. A common pattern is to fetch recent events before generating a response.
```python
from sqlalchemy import text

def load_recent_events(session_id: str):
    query = text("""
        SELECT event_type, payload::text AS payload_text, created_at
        FROM banking_events
        WHERE session_id = :session_id
        ORDER BY created_at DESC
        LIMIT 5
    """)
    with engine.connect() as conn:
        rows = conn.execute(query, {"session_id": session_id}).fetchall()
    return [
        {
            "event_type": row.event_type,
            "payload": row.payload_text,
            "created_at": str(row.created_at),
        }
        for row in rows
    ]

recent_events = load_recent_events("customer_123")
print(recent_events)
```
You can pass this data into your prompt template or tool layer to make responses aware of prior actions without stuffing everything into the conversation window.
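One simple way to do that is a small formatter that renders the rows as a compact context block for the system prompt. The `format_events_for_prompt` helper below is a sketch of mine, shown with sample data rather than a live query:

```python
def format_events_for_prompt(events: list[dict]) -> str:
    """Render recent structured events as a context block, newest first."""
    if not events:
        return "No prior events recorded for this session."
    lines = ["Recent events (newest first):"]
    for event in events:
        lines.append(f"- [{event['created_at']}] {event['event_type']}: {event['payload']}")
    return "\n".join(lines)

# Sample row shaped like the output of load_recent_events above.
sample = [
    {
        "event_type": "intent_classified",
        "payload": '{"intent": "card_dispute", "confidence": 0.94}',
        "created_at": "2024-05-01 10:15:00",
    }
]
print(format_events_for_prompt(sample))
```

Keeping this as plain text in the system prompt, rather than replaying full messages, also keeps token usage predictable.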
Testing the Integration
Run a simple end-to-end check:
```python
test_session = "integration_test_001"

with engine.begin() as conn:
    conn.execute(
        text("""
            INSERT INTO banking_events (session_id, event_type, payload)
            VALUES (:session_id, :event_type, :payload::json)
        """),
        {
            "session_id": test_session,
            "event_type": "test_event",
            "payload": '{"status":"ok","source":"integration_test"}',
        },
    )

with engine.connect() as conn:
    row = conn.execute(
        text("""
            SELECT event_type, payload::text
            FROM banking_events
            WHERE session_id = :session_id
            ORDER BY created_at DESC
            LIMIT 1
        """),
        {"session_id": test_session},
    ).fetchone()

print(row)
```
Expected output:
```
('test_event', '{"status":"ok","source":"integration_test"}')
```
If you also wired up RunnableWithMessageHistory, send two messages with the same session_id and confirm the second response has access to prior context stored in PostgreSQL.
Real-World Use Cases
- Customer service agents with audit trails: keep conversation history in PostgreSQL while using LangChain to route intents like card replacement, balance questions, or dispute initiation.
- Fraud triage assistants: store risk signals and investigation notes in PostgreSQL, then use LangChain to summarize cases and recommend next actions based on structured event history.
- Relationship manager copilots: pull account activity from PostgreSQL and use LangChain to generate client summaries before meetings, without exposing raw PII beyond approved policy boundaries.
The main pattern is simple: let LangChain handle reasoning and tool orchestration; let PostgreSQL handle persistence and system-of-record duties. That split keeps your AI agent maintainable when you move from prototype to production.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit