How to Integrate LangChain with PostgreSQL for Production AI in Retail Banking
Combining LangChain for retail banking with PostgreSQL gives you a clean production pattern for bank-grade AI agents: the LLM handles reasoning, while Postgres stores customer context, conversation state, audit trails, and retrieval data. That means you can build assistants that answer account questions, summarize service history, route cases, and keep everything queryable and compliant.
Prerequisites
- Python 3.10+
- PostgreSQL 14+ running locally or in your environment
- A database user with read/write access
- `psycopg2-binary` or `psycopg` installed
- `langchain`, `langchain-community`, and your LLM provider package installed
- An API key for your model provider set in environment variables
- A PostgreSQL schema ready for:
  - customer profiles
  - conversation memory
  - case notes or support tickets
  - audit logs
Install the core packages:
```bash
pip install langchain langchain-community langchain-openai psycopg2-binary sqlalchemy python-dotenv
```
Integration Steps
Step 1: Create the PostgreSQL connection
Use SQLAlchemy for production code. It gives you pooling, clear connection strings, and better control than ad hoc connections.
```python
import os

from sqlalchemy import create_engine, text

DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "postgresql+psycopg2://bank_user:bank_pass@localhost:5432/retail_bank",
)

engine = create_engine(
    DATABASE_URL,
    pool_size=10,
    max_overflow=20,
    pool_pre_ping=True,  # test connections before reuse so stale ones get recycled
)

# Quick connectivity check
with engine.connect() as conn:
    result = conn.execute(text("SELECT version();"))
    print(result.scalar())
```
This is the base layer. Every LangChain-backed agent flow should read and write through this engine or a wrapper around it.
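If you do add a wrapper, it can stay thin. Here's a minimal sketch (the helper name is illustrative, not a LangChain API):

```python
from contextlib import contextmanager

@contextmanager
def bank_db():
    """Single entry point for reads and writes: pooled, transactional, easy to audit."""
    # engine.begin() commits on success and rolls back on error
    with engine.begin() as conn:
        yield conn
```

Routing everything through one function gives you a single place to add logging or access checks later.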
Step 2: Create tables for agent memory and banking context
For retail banking agents, you need persistent state. Store chat history separately from customer records so you can control retention and access.
```python
from sqlalchemy import MetaData, Table, Column, Integer, String, Text, DateTime, func

metadata = MetaData()

customer_profiles = Table(
    "customer_profiles",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("customer_id", String(64), unique=True, nullable=False),
    Column("full_name", String(255), nullable=False),
    Column("segment", String(50), nullable=False),
    Column("status", String(50), nullable=False),
)

conversation_logs = Table(
    "conversation_logs",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("session_id", String(128), nullable=False),
    Column("customer_id", String(64), nullable=False),
    Column("role", String(32), nullable=False),  # "user", "assistant", "system"
    Column("content", Text, nullable=False),
    Column("created_at", DateTime(timezone=True), server_default=func.now()),
)

metadata.create_all(engine)
```
This keeps the agent’s memory auditable. In banking systems, that matters more than clever prompts.
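To make that concrete, here's an illustrative audit query over the tables above: pull every turn of a session, oldest first, for a compliance review. (The function name is mine, not part of LangChain.)

```python
from sqlalchemy import select

def session_transcript(session_id: str):
    """Return every logged turn for one session, in chronological order."""
    stmt = (
        select(
            conversation_logs.c.role,
            conversation_logs.c.content,
            conversation_logs.c.created_at,
        )
        .where(conversation_logs.c.session_id == session_id)
        .order_by(conversation_logs.c.created_at)
    )
    with engine.connect() as conn:
        return conn.execute(stmt).fetchall()
```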
Step 3: Wire PostgreSQL into LangChain as a tool
LangChain agents work best when they can call deterministic tools for database access. Use `SQLDatabase` from `langchain_community.utilities` to expose your Postgres schema safely.
```python
from langchain_community.utilities import SQLDatabase

# Expose only the tables the agent is allowed to see
db = SQLDatabase.from_uri(
    DATABASE_URL,
    include_tables=["customer_profiles", "conversation_logs"],
)

print(db.get_usable_table_names())
print(db.run("SELECT customer_id, segment FROM customer_profiles LIMIT 5;"))
```
If you’re building a retail banking assistant, this is how the agent answers grounded questions like:
- “What segment is this customer in?”
- “Show recent service interactions.”
- “Did we already log a complaint about card replacement?”
Step 4: Build a LangChain agent that queries Postgres
Use an LLM plus a SQL tool so the agent can translate natural language into controlled database queries. This is the pattern you want for production AI: model for reasoning, database for truth.
```python
import os

from langchain_openai import ChatOpenAI
from langchain_community.agent_toolkits import create_sql_agent

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,  # deterministic answers for banking queries
    api_key=os.getenv("OPENAI_API_KEY"),
)

agent_executor = create_sql_agent(
    llm=llm,
    db=db,
    verbose=True,
)

response = agent_executor.invoke({
    "input": "Find the segment for customer_id CUST-10021 and summarize any recent notes."
})
print(response["output"])
```
For banking use cases, keep temperature at 0 and restrict table access. Don’t give the model free rein over your entire database.
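One way to enforce that restriction, sketched under the assumption that you've provisioned a separate read-only Postgres role (the bank_readonly user below is hypothetical):

```python
import os

from langchain_community.utilities import SQLDatabase

# A dedicated read-only role keeps the agent away from write paths entirely;
# include_tables narrows it further to the tables it actually needs.
READONLY_URL = os.getenv(
    "READONLY_DATABASE_URL",
    "postgresql+psycopg2://bank_readonly:readonly_pass@localhost:5432/retail_bank",
)

readonly_db = SQLDatabase.from_uri(
    READONLY_URL,
    include_tables=["customer_profiles"],
)
```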
Step 5: Persist conversation turns back into PostgreSQL
You want every interaction stored so compliance teams can review it later. A simple write path is enough to start; later you can add redaction and retention policies.
```python
from sqlalchemy import insert

def log_message(session_id: str, customer_id: str, role: str, content: str):
    stmt = insert(conversation_logs).values(
        session_id=session_id,
        customer_id=customer_id,
        role=role,
        content=content,
    )
    # engine.begin() commits on success and rolls back on error
    with engine.begin() as conn:
        conn.execute(stmt)

log_message(
    session_id="sess-9001",
    customer_id="CUST-10021",
    role="user",
    content="What is my account status?",
)
```
That gives you an end-to-end loop:
- user message comes in
- LangChain reasons over approved data
- response is generated
- transcript is persisted in Postgres
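A minimal turn handler ties those four steps together. This is a sketch built from the pieces above (log_message and agent_executor), not a prescribed LangChain pattern:

```python
def run_turn(session_id: str, customer_id: str, user_input: str) -> str:
    # 1. Persist the incoming user message
    log_message(session_id, customer_id, "user", user_input)
    # 2-3. Let the agent reason over approved data and generate a response
    response = agent_executor.invoke({"input": user_input})
    answer = response["output"]
    # 4. Persist the assistant's reply for the audit trail
    log_message(session_id, customer_id, "assistant", answer)
    return answer
```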
Testing the Integration
Run a simple smoke test that inserts a record and asks the agent to retrieve it.
```python
from sqlalchemy import insert

# Seed a known record, then ask the agent to retrieve it
with engine.begin() as conn:
    conn.execute(insert(customer_profiles).values(
        customer_id="CUST-10021",
        full_name="Amina Patel",
        segment="Premier",
        status="Active",
    ))

result = agent_executor.invoke({
    "input": "What is the status and segment of customer CUST-10021?"
})
print(result["output"])
```
Expected output (exact wording will vary, but the facts should match):

```
Customer CUST-10021 is Active and belongs to the Premier segment.
```
If that works, your LangChain-to-PostgreSQL path is wired correctly.
Real-World Use Cases
- Retail banking service assistant
  - Answer account-status questions from approved tables.
  - Summarize recent support cases before handing off to a human agent.
- Complaint triage and case routing
  - Classify incoming complaints.
  - Store routing decisions and notes in PostgreSQL for auditability.
- Relationship manager copilot
  - Pull customer segments, recent interactions, and product holdings.
  - Generate meeting prep summaries from structured bank data.
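For the complaint-triage case, the classification call itself can be small. A sketch with illustrative categories and prompt (store the returned label with the same insert pattern shown in Step 5):

```python
from langchain_openai import ChatOpenAI

triage_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

TRIAGE_PROMPT = (
    "Classify this retail banking complaint into exactly one category: "
    "cards, payments, fees, fraud, or other. Reply with the category only.\n\n"
    "Complaint: {complaint}"
)

def triage_complaint(complaint: str) -> str:
    # temperature=0 keeps labels stable enough to route and store on
    result = triage_llm.invoke(TRIAGE_PROMPT.format(complaint=complaint))
    return result.content.strip().lower()
```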
The production pattern here is straightforward: keep Postgres as the source of truth, let LangChain orchestrate tool calls, and never let the model invent facts that should come from your database.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.