LangChain Tutorial (TypeScript): connecting to PostgreSQL for intermediate developers
By Cyprian Aarons · Updated 2026-04-21
This tutorial shows you how to wire LangChain in TypeScript to PostgreSQL, store chat memory in a real database, and query it back through a working agent setup. You need this when your app has to survive restarts, support multiple users, and keep conversation state out of local memory.
What You'll Need
- Node.js 18+
- A PostgreSQL instance you can connect to locally or remotely
- A PostgreSQL database and user with write access
- An OpenAI API key
- These packages:
  - langchain
  - @langchain/core
  - @langchain/community
  - @langchain/openai
  - pg
  - dotenv
  - typescript
  - ts-node or tsx
Install them like this:
npm install langchain @langchain/core @langchain/community @langchain/openai pg dotenv
npm install -D typescript tsx @types/node
Create a .env file:
OPENAI_API_KEY=your_key_here
POSTGRES_CONNECTION_STRING=postgresql://postgres:password@localhost:5432/langchain_demo
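If you don't already have one, a minimal tsconfig.json that plays well with tsx and Node 18+ looks roughly like this (a sketch; adjust target and layout to your project):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```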
Step-by-Step
1. Set up a small TypeScript project and load environment variables. This keeps your connection string and API key out of source code, which matters the moment you move beyond local testing.
import "dotenv/config";

const requiredEnv = ["OPENAI_API_KEY", "POSTGRES_CONNECTION_STRING"] as const;

for (const key of requiredEnv) {
  if (!process.env[key]) {
    throw new Error(`Missing required env var: ${key}`);
  }
}

console.log("Environment loaded.");
2. Create the PostgreSQL table that will store chat history. LangChain’s Postgres message history expects a table with id, session ID, message payload, and timestamp columns. (The store can also create a table for you on first use, but defining the schema yourself keeps it explicit and under your control.)
import { Client } from "pg";
import "dotenv/config";

async function main() {
  const client = new Client({
    connectionString: process.env.POSTGRES_CONNECTION_STRING,
  });
  await client.connect();

  await client.query(`
    CREATE TABLE IF NOT EXISTS chat_history (
      id SERIAL PRIMARY KEY,
      session_id TEXT NOT NULL,
      message JSONB NOT NULL,
      created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );
  `);

  console.log("Table ready.");
  await client.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
3. Connect LangChain’s message history to PostgreSQL. The important part here is that each user session gets its own session ID, so you can isolate conversations per customer, account, or support case.
import "dotenv/config";
import { PostgresChatMessageHistory } from "@langchain/community/stores/message/postgres";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

async function main() {
  const history = new PostgresChatMessageHistory({
    tableName: "chat_history",
    sessionId: "customer_123",
    poolConfig: {
      connectionString: process.env.POSTGRES_CONNECTION_STRING!,
    },
  });

  await history.addMessage(new HumanMessage("Hello, I need help with my policy."));
  await history.addMessage(new AIMessage("Sure. What is your policy number?"));

  const messages = await history.getMessages();
  console.log(messages.map((m) => `${m._getType()}: ${m.content}`));

  // end() closes the underlying pg pool.
  await history.end();
}

main().catch(console.error);
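To sanity-check what actually landed in the table, you can query it directly with psql. The message column holds LangChain's serialized message; assuming the usual StoredMessage shape, a top-level type field carries the role:

```sql
SELECT session_id,
       message->>'type' AS role,
       created_at
FROM chat_history
WHERE session_id = 'customer_123'
ORDER BY created_at;
```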
4. Wrap your model with message history so LangChain automatically reads and writes the conversation. This is the production pattern you want: the chain stays stateless, while PostgreSQL owns persistence.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { PostgresChatMessageHistory } from "@langchain/community/stores/message/postgres";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful insurance assistant."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const chain = prompt.pipe(model);

const runnable = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: async (sessionId) =>
    new PostgresChatMessageHistory({
      tableName: "chat_history",
      sessionId,
      poolConfig: {
        connectionString: process.env.POSTGRES_CONNECTION_STRING!,
      },
    }),
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

async function main() {
  const result = await runnable.invoke(
    { input: "My premium increased. Why?" },
    { configurable: { sessionId: "customer_123" } }
  );
  console.log(result.content);
}

main().catch(console.error);
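The hard-coded sessionId above is fine for a demo, but in a real app you will derive it from your own identifiers. A hypothetical helper (the accountId and caseId names are illustrative, not part of LangChain):

```typescript
// Hypothetical: build a stable, namespaced session ID from your own domain IDs.
// Any scheme works as long as it is deterministic and unique per conversation.
export function sessionIdFor(accountId: string, caseId: string): string {
  return `acct_${accountId}:case_${caseId}`;
}

// Usage with the runnable above:
// await runnable.invoke(
//   { input: "..." },
//   { configurable: { sessionId: sessionIdFor("42", "claim-7") } }
// );
```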
5. Run two turns against the same session ID and confirm the model remembers context. If the second response references the first turn without you manually passing prior messages, the PostgreSQL-backed memory is working.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { PostgresChatMessageHistory } from "@langchain/community/stores/message/postgres";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise support agent."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const chain = prompt.pipe(model);

const runnable = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: async (sessionId) =>
    new PostgresChatMessageHistory({
      tableName: "chat_history",
      sessionId,
      poolConfig: { connectionString: process.env.POSTGRES_CONNECTION_STRING! },
    }),
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

async function main() {
  const config = { configurable: { sessionId: "customer_456" } };

  const first = await runnable.invoke({ input: "I filed a claim yesterday." }, config);
  console.log("Turn 1:", first.content);

  // Second turn: no manual history passing needed; it comes from PostgreSQL.
  const second = await runnable.invoke({ input: "What did I just tell you?" }, config);
  console.log("Turn 2:", second.content);
}

main().catch(console.error);
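When eyeballing the two turns, it helps to print the stored history in a readable form. Here is a small pure helper over the shape you get back from getMessages(), reduced to type/content pairs mirroring _getType() and content:

```typescript
// Minimal shape extracted from a BaseMessage: m._getType() and m.content.
interface TranscriptEntry {
  type: string; // "human" | "ai" | "system"
  content: string;
}

const labels: Record<string, string> = { human: "User", ai: "Assistant", system: "System" };

// Render a session's history as a readable transcript, one line per message.
export function formatTranscript(entries: TranscriptEntry[]): string {
  return entries
    .map((e) => `${labels[e.type] ?? e.type}: ${e.content}`)
    .join("\n");
}
```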
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit