LangChain Tutorial (Python): connecting to PostgreSQL for intermediate developers
This tutorial shows you how to connect a LangChain Python app to PostgreSQL, store conversation state in the database, and query it back in a way that’s usable in production. You’d do this when you want chat history, agent memory, or workflow state to survive restarts and be shared across multiple workers.
What You'll Need
- Python 3.10+
- A running PostgreSQL instance
- A PostgreSQL user with permission to create tables
- An OpenAI API key
- These Python packages: langchain, langchain-openai, langchain-postgres, psycopg[binary], python-dotenv
Install them with:
```shell
pip install langchain langchain-openai langchain-postgres psycopg[binary] python-dotenv
```
Set your environment variables:
```shell
export OPENAI_API_KEY="your-openai-api-key"
export POSTGRES_URL="postgresql+psycopg://postgres:postgres@localhost:5432/postgres"
```
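One note on that URL: the `postgresql+psycopg://` scheme is SQLAlchemy's driver notation, while `psycopg.connect` itself only accepts a plain `postgresql://` DSN. If you keep a single URL for both uses, a small helper can normalize it (the function name is mine, not part of any library):

```python
def to_psycopg_dsn(url: str) -> str:
    """Strip a SQLAlchemy driver suffix (e.g. "+psycopg") from a database URL
    so the result is accepted by psycopg.connect()."""
    scheme, sep, rest = url.partition("://")
    if not sep:
        raise ValueError(f"not a URL: {url!r}")
    # "postgresql+psycopg" -> "postgresql"; plain schemes pass through unchanged
    return scheme.split("+", 1)[0] + "://" + rest

print(to_psycopg_dsn("postgresql+psycopg://postgres:postgres@localhost:5432/postgres"))
# -> postgresql://postgres:postgres@localhost:5432/postgres
```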
Step-by-Step
- Start by creating a small script that loads your environment variables and initializes the chat model. Keeping the database connection string in one place lets you move between local and production without changing code.

```python
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
POSTGRES_URL = os.environ["POSTGRES_URL"]

llm = ChatOpenAI(model="gpt-4o-mini", api_key=OPENAI_API_KEY)
print("LLM ready")
```
- Next, create a PostgreSQL-backed chat history object for a specific session. The `langchain_postgres` class takes a table name and a live psycopg connection rather than a bare URL, and it stores session IDs as UUIDs. Every session ID becomes a durable conversation thread.

```python
import uuid
import psycopg
from langchain_postgres import PostgresChatMessageHistory

# psycopg expects a plain postgresql:// DSN, not SQLAlchemy's "+psycopg" scheme
conn = psycopg.connect(POSTGRES_URL.replace("postgresql+psycopg://", "postgresql://"))
table_name = "chat_history"
PostgresChatMessageHistory.create_tables(conn, table_name)  # idempotent; safe on every start

# Derive a stable UUID from the customer key so reruns load the same thread
session_id = str(uuid.uuid5(uuid.NAMESPACE_DNS, "customer-123"))
history = PostgresChatMessageHistory(table_name, session_id, sync_connection=conn)
print(f"Loaded history for session: {session_id}")
```
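A note on session IDs: langchain-postgres's default schema keeps them in a UUID column, while applications usually track conversations by business keys like "customer-123". A deterministic UUIDv5 mapping bridges the two, so the same customer always resolves to the same thread. The helper name and namespace below are mine, purely for illustration:

```python
import uuid

# A fixed namespace for your app; any stable UUID works. This one is derived
# from a hypothetical domain name.
SESSION_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "chat.example.com")

def session_uuid(business_id: str) -> str:
    """Map a human-readable ID like "customer-123" to a stable UUID string."""
    return str(uuid.uuid5(SESSION_NAMESPACE, business_id))

print(session_uuid("customer-123") == session_uuid("customer-123"))  # -> True
```

Pass `session_uuid("customer-123")` wherever a session ID is required; rerunning with the same customer key reloads the same history.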
- Now add messages to the history and confirm they were written. This is the simplest way to validate the database layer before wiring it into an agent or chain.

```python
from langchain_core.messages import AIMessage, HumanMessage

history.add_messages([
    HumanMessage(content="I need help resetting my policy portal password."),
    AIMessage(content="I can help with that. Have you tried the reset link on the login page?"),
])

for msg in history.messages:
    print(type(msg).__name__, "=>", msg.content)
```
- Wrap the model call so it uses the stored history as context. For intermediate apps, this is usually where you connect persistent memory to a chatbot endpoint or background worker.

```python
from langchain_core.messages import SystemMessage

system_prompt = SystemMessage(
    content="You are a helpful support assistant for an insurance portal."
)
question = HumanMessage(content="The reset link expired. What now?")

conversation = [system_prompt] + history.messages + [question]
response = llm.invoke(conversation)
print(response.content)

# Persist the new turn so the next request sees it
history.add_messages([question, response])
```
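The conversation list above grows with every turn, and sending the full history eventually blows the model's context window. A common pattern is to send only the system prompt plus the last few turns; here is a plain list-slicing sketch (LangChain also ships token-aware trimming utilities you may prefer):

```python
def last_turns(messages, max_turns: int):
    """Keep only the most recent max_turns exchanges, assuming two messages
    per turn (one human, one AI). Works on any list of messages."""
    return messages[-2 * max_turns:] if max_turns > 0 else []

# Shown with plain strings for clarity; real usage passes message objects
history_stub = ["h1", "a1", "h2", "a2", "h3", "a3"]
print(last_turns(history_stub, 2))  # -> ['h2', 'a2', 'h3', 'a3']
```

With this in place, the model call becomes `llm.invoke([system_prompt] + last_turns(history.messages, 5) + [question])` while the database still keeps the full record.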
- If you want a clean setup for multiple environments, create the table schema once and reuse it. This avoids runtime surprises when your app starts on a fresh database.

```python
import uuid
import psycopg
from langchain_postgres import PostgresChatMessageHistory

conn = psycopg.connect(POSTGRES_URL.replace("postgresql+psycopg://", "postgresql://"))
# create_tables takes the connection and table name; run it at deploy time
PostgresChatMessageHistory.create_tables(conn, "chat_history")

history = PostgresChatMessageHistory(
    "chat_history",
    str(uuid.uuid5(uuid.NAMESPACE_DNS, "customer-456")),
    sync_connection=conn,
)
history.add_user_message("Can I update my address online?")
print(history.messages[-1].content)
```
Testing It
Run the script twice with the same session_id. On the second run, you should see the previous messages loaded from PostgreSQL instead of starting with an empty conversation.
If you want to verify persistence at the database level, connect with psql or your SQL client and inspect the tables created by langchain_postgres. You should see rows for each stored message tied to the same session ID.
A good production test is to restart your app container or process and confirm that history.messages still returns prior turns. That tells you your memory is durable and not just living in Python process state.
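To script that same check, you can count a session's rows directly with psycopg. The table name and `session_id` column below assume the default schema created by langchain-postgres; verify both against your actual database first:

```python
def count_query(table_name: str) -> str:
    """Build the row-count SQL. psycopg cannot parameterize identifiers, so the
    table name is interpolated; use only trusted, hard-coded names here."""
    return f"SELECT count(*) FROM {table_name} WHERE session_id = %s"

def count_session_messages(dsn: str, table_name: str, session_id: str) -> int:
    """Count stored messages for one session, straight from PostgreSQL."""
    import psycopg  # local import keeps this helper optional
    with psycopg.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(count_query(table_name), (session_id,))
        return cur.fetchone()[0]
```

After a restart, a count greater than zero for a known session ID confirms the history survived the process.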
Next Steps
- Wire PostgresChatMessageHistory into RunnableWithMessageHistory so your chain automatically loads and saves messages per request.
- Add row-level retention policies so old sessions expire cleanly instead of growing forever.
- Move from plain chat history to structured state using PostgreSQL tables for tool results, audit events, or agent checkpoints.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.