How to Integrate LangChain for Banking with PostgreSQL for Startups
Combining LangChain for banking with PostgreSQL gives you a practical way to build AI agents that can answer customer questions, pull account-like records, and persist conversation state or audit logs. For startups, that means you can move from brittle chat demos to systems that actually remember context, query structured data, and stay inspectable.
The pattern is simple: LangChain handles orchestration and tool use, while PostgreSQL stores the business data, chat history, and retrieval indexes your agent needs.
Prerequisites
- Python 3.10+
- A running PostgreSQL instance
- A database user with read/write access
- `psycopg2` or `psycopg` installed
- LangChain installed with your banking-related chain/tool packages
- An API key for your LLM provider if your agent uses one
- A `.env` file or secret manager for credentials
Install the core packages:
```shell
pip install langchain langchain-community langchain-openai psycopg2-binary sqlalchemy python-dotenv
```
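The prerequisites mention a `.env` file; a minimal one for this setup might look like the following (both values are placeholders, so substitute your own credentials and key, and never commit this file):

```
POSTGRES_URI=postgresql+psycopg2://bank_user:bank_pass@localhost:5432/banking_ai
OPENAI_API_KEY=your-key-here
```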
Integration Steps
1. Set up PostgreSQL connection settings

Start by defining a clean connection string. In production, keep credentials in environment variables and never hardcode them.

```python
import os
from dotenv import load_dotenv

load_dotenv()

POSTGRES_URI = os.getenv(
    "POSTGRES_URI",
    "postgresql+psycopg2://bank_user:bank_pass@localhost:5432/banking_ai"
)
```
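When logging connection details, avoid printing the raw URI, which contains the password. One option (a sketch, assuming SQLAlchemy 1.4+): parse the URI with SQLAlchemy's `make_url` and render it with the password masked.

```python
from sqlalchemy.engine import make_url

# Parse the URI into a structured URL object.
url = make_url("postgresql+psycopg2://bank_user:bank_pass@localhost:5432/banking_ai")

# Safe to log: the password is replaced with "***".
print(url.render_as_string(hide_password=True))
```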
2. Create the PostgreSQL schema for agent state

Store conversations, tool outputs, and banking events in separate tables. This keeps your audit trail clean and makes debugging much easier.

```python
from sqlalchemy import create_engine, text

engine = create_engine(POSTGRES_URI)

schema_sql = """
CREATE TABLE IF NOT EXISTS agent_messages (
    id SERIAL PRIMARY KEY,
    session_id TEXT NOT NULL,
    role TEXT NOT NULL,
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS banking_events (
    id SERIAL PRIMARY KEY,
    customer_id TEXT NOT NULL,
    event_type TEXT NOT NULL,
    payload JSONB NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);
"""

with engine.begin() as conn:
    conn.execute(text(schema_sql))
```
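The audit queries later in this guide filter `agent_messages` by `session_id` and sort by `created_at`, and `banking_events` will typically be read per customer. Indexes along those access paths keep lookups fast as the tables grow; a sketch (the index names are introduced here, not part of the schema above):

```sql
CREATE INDEX IF NOT EXISTS idx_agent_messages_session
    ON agent_messages (session_id, created_at);

CREATE INDEX IF NOT EXISTS idx_banking_events_customer
    ON banking_events (customer_id, created_at);
```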
3. Connect LangChain to PostgreSQL-backed memory

For startup-grade agents, you want persistent memory. LangChain’s message history integration works well when backed by Postgres.

```python
from langchain_community.chat_message_histories import SQLChatMessageHistory

history = SQLChatMessageHistory(
    session_id="customer_123",
    connection_string=POSTGRES_URI
)

history.add_user_message("What is my current balance?")
history.add_ai_message("I can help with that. Let me check your records.")
```
4. Wire a banking tool into the LangChain agent

In a real banking workflow, your “banking” layer is usually a service or internal API. LangChain exposes tools via `Tool` or `StructuredTool`, which lets the agent call deterministic functions instead of hallucinating answers.

```python
import json

from langchain_core.tools import tool

@tool
def get_customer_balance(customer_id: str) -> str:
    """Fetch the latest available balance for a customer."""
    # Replace this with a real banking API call.
    result = {
        "customer_id": customer_id,
        "currency": "USD",
        "available_balance": 4820.75
    }
    return json.dumps(result)

print(get_customer_balance.invoke({"customer_id": "customer_123"}))
```
5. Build the agent and persist its outputs to PostgreSQL

Use LangChain’s chat model interface and store both user input and assistant output in Postgres so every interaction is traceable.

```python
from langchain.agents import AgentType, initialize_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_customer_balance]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

user_input = "Check balance for customer_123 and summarize it."
response = agent.invoke({"input": user_input})

with engine.begin() as conn:
    conn.execute(
        text("""
            INSERT INTO agent_messages (session_id, role, content)
            VALUES (:session_id, :role, :content)
        """),
        {"session_id": "customer_123", "role": "user", "content": user_input}
    )
    conn.execute(
        text("""
            INSERT INTO agent_messages (session_id, role, content)
            VALUES (:session_id, :role, :content)
        """),
        {"session_id": "customer_123", "role": "assistant", "content": response["output"]}
    )

print(response["output"])
```
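The two INSERT statements above are identical except for the row values; factoring them into a helper keeps persistence logic in one place. A sketch (`log_turn` is a name introduced here, not a LangChain API):

```python
from sqlalchemy import text

def log_turn(engine, session_id: str, role: str, content: str) -> None:
    """Persist one conversation turn to the agent_messages table."""
    with engine.begin() as conn:
        conn.execute(
            text(
                "INSERT INTO agent_messages (session_id, role, content) "
                "VALUES (:session_id, :role, :content)"
            ),
            {"session_id": session_id, "role": role, "content": content},
        )

# Usage after an agent run:
# log_turn(engine, "customer_123", "user", user_input)
# log_turn(engine, "customer_123", "assistant", response["output"])
```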
Testing the Integration
Run a simple query against the database after invoking the agent. You want to confirm two things: the tool executed and the transcript was saved.
```python
from sqlalchemy import text

with engine.connect() as conn:
    rows = conn.execute(
        text("""
            SELECT role, content
            FROM agent_messages
            WHERE session_id = :session_id
            ORDER BY created_at ASC
        """),
        {"session_id": "customer_123"}
    ).fetchall()

for row in rows:
    print(f"{row.role}: {row.content}")
```
Expected output (the assistant’s exact wording will vary from run to run):

```
user: Check balance for customer_123 and summarize it.
assistant: The available balance for customer_123 is USD 4,820.75.
```
If you also want to verify persistence directly:
```python
with engine.connect() as conn:
    count = conn.execute(
        text("SELECT COUNT(*) FROM agent_messages WHERE session_id = :session_id"),
        {"session_id": "customer_123"}
    ).scalar_one()

print(count)
```
Expected output:

```
2
```
Real-World Use Cases
- Customer support agents
  - Answer questions about balances, recent transactions, and account status using a controlled tool layer.
  - Persist every interaction in PostgreSQL for compliance review.
- Internal ops copilots
  - Let operations teams query onboarding status, failed payment events, or KYC review queues through natural language.
  - Store tool calls and outcomes in Postgres so support can replay what happened.
- Personal finance assistants
  - Build an assistant that reads structured financial records from Postgres and uses LangChain to explain them in plain English.
  - Add retrieval over transaction notes or policy docs later without changing the core architecture.
The main rule here is discipline: keep banking logic outside the model, keep state in PostgreSQL, and let LangChain orchestrate only what it should call next. That gives startups an AI system that is debuggable on Monday morning instead of mysterious by Friday afternoon.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.