LangChain Tutorial (TypeScript): connecting to PostgreSQL for advanced developers
This tutorial shows how to wire LangChain TypeScript to PostgreSQL so your agent can read, write, and reason over structured data instead of just calling an LLM. You’d use this when your app needs durable state, auditability, SQL-backed retrieval, or agent workflows that depend on business data already living in Postgres.
What You'll Need
- Node.js 18+
- A PostgreSQL database you can connect to
- A valid OpenAI API key
- langchain, @langchain/openai, @langchain/community, pg, and typescript
- A .env file for secrets
- Basic familiarity with async/await and SQL
Install the packages:
npm install langchain @langchain/openai @langchain/community pg typeorm dotenv
npm install -D typescript tsx @types/node
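The scripts below read their secrets from a .env file. A minimal one might look like this (all values are placeholders you replace with your own):

```ini
# .env — never commit this file
DATABASE_URL=postgres://user:password@localhost:5432/mydb
OPENAI_API_KEY=your-openai-key
PGSSL=false
```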
Step-by-Step
- Create a typed PostgreSQL connection pool and verify the database is reachable before you bring LangChain into the picture. In production, this is where you control timeouts, pooling, and credentials.
import "dotenv/config";
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: process.env.PGSSL === "true" ? { rejectUnauthorized: false } : undefined,
});

async function main() {
  const result = await pool.query("SELECT NOW() AS now");
  console.log("PostgreSQL connected at:", result.rows[0].now);
  await pool.end();
}

main().catch(async (err) => {
  console.error(err);
  await pool.end();
  process.exit(1);
});
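In production you rarely want one transient network hiccup to crash startup. A small generic retry wrapper (a sketch of my own, not a pg or LangChain API) smooths this over; the attempt counts and delays are illustrative:

```typescript
// Generic retry helper: runs an async operation up to `attempts` times,
// waiting `delayMs` between tries, and rethrows the last error if all fail.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

// Usage with the pool from the connection check:
// const result = await withRetry(() => pool.query("SELECT NOW() AS now"));
```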
- Set up a small table for agent memory or business context. This example stores customer notes, but the same pattern works for case management, policy metadata, or conversation state.
import "dotenv/config";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function setup() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS customer_notes (
      id SERIAL PRIMARY KEY,
      customer_id TEXT NOT NULL,
      note TEXT NOT NULL,
      created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
      UNIQUE (customer_id, note)
    )
  `);
  // The UNIQUE constraint gives ON CONFLICT something to match against,
  // so re-running this seed script does not insert duplicate rows.
  await pool.query(`
    INSERT INTO customer_notes (customer_id, note)
    VALUES ($1, $2)
    ON CONFLICT DO NOTHING
  `, ["cust_123", "Customer asked about premium increase and wants callback next week."]);
  console.log("Table ready.");
  await pool.end();
}

setup().catch(async (err) => {
  console.error(err);
  await pool.end();
  process.exit(1);
});
- Use LangChain’s PostgreSQL vector store when you need semantic retrieval over rows or documents stored in Postgres. This version assumes you’ve already created embeddings-ready content; it’s the cleanest path for “find relevant records” workflows.
import "dotenv/config";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PGVectorStore } from "@langchain/community/vectorstores/pgvector";

async function main() {
  const embeddings = new OpenAIEmbeddings({
    apiKey: process.env.OPENAI_API_KEY,
    model: "text-embedding-3-small",
  });

  const store = await PGVectorStore.initialize(embeddings, {
    postgresConnectionOptions: {
      connectionString: process.env.DATABASE_URL!,
    },
    tableName: "customer_notes_vectors",
    columns: {
      idColumnName: "id",
      vectorColumnName: "embedding",
      contentColumnName: "content",
      metadataColumnName: "metadata",
    },
  });

  const docs = await store.similaritySearch("premium increase callback", 3);
  console.log(docs.map((d) => d.pageContent));

  // PGVectorStore manages its own connection pool; close it when done.
  await store.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
- If your use case is structured querying rather than embeddings, connect LangChain to SQL directly and let the model generate safe queries against your schema. This is the right approach for analytics assistants, claims lookup tools, and ops copilots.
import "dotenv/config";
import { DataSource } from "typeorm";
import { SqlDatabase } from "langchain/sql_db";
import { ChatOpenAI } from "@langchain/openai";
import { createSqlAgent, SqlToolkit } from "langchain/agents/toolkits/sql";

async function main() {
  const datasource = new DataSource({
    type: "postgres",
    url: process.env.DATABASE_URL,
    synchronize: false,
    logging: false,
    entities: [],
  });
  await datasource.initialize();

  const db = await SqlDatabase.fromDataSourceParams({
    appDataSource: datasource,
  });

  const llm = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
    temperature: 0,
  });

  // createSqlAgent expects a SqlToolkit, not a bare SqlDatabase.
  const agent = createSqlAgent(llm, new SqlToolkit(db, llm));

  const result = await agent.invoke({
    input: "How many rows are in customer_notes?",
  });
  console.log(result.output);

  await datasource.destroy();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
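Before any model-generated SQL reaches your database, it is worth gating it explicitly. This guard (isReadOnlySql is an illustrative name of mine, not a LangChain API) rejects anything that is not a single SELECT; it complements, rather than replaces, a read-only database role:

```typescript
// Crude read-only gate for LLM-generated SQL: allow one SELECT statement
// and nothing else. An extra layer in front of the agent, not a substitute
// for connecting the agent through a read-only Postgres role.
function isReadOnlySql(sql: string): boolean {
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  if (trimmed.includes(";")) return false; // no multi-statement batches
  if (!/^select\b/i.test(trimmed)) return false;
  // Reject writes smuggled in via CTEs or subqueries.
  if (/\b(insert|update|delete|drop|alter|truncate|grant)\b/i.test(trimmed)) {
    return false;
  }
  return true;
}
```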
- For production systems, wrap reads and writes in explicit functions instead of letting the agent hit raw tables directly. That gives you validation, logging, and a place to enforce row-level rules.
import "dotenv/config";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function addCustomerNote(customerId: string, note: string) {
  if (!customerId || !note) throw new Error("customerId and note are required");
  const query = `
    INSERT INTO customer_notes (customer_id, note)
    VALUES ($1, $2)
    RETURNING id, customer_id, note, created_at
  `;
  const result = await pool.query(query, [customerId, note]);
  return result.rows[0];
}
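The null check above is the bare minimum; a wrapper like this is also the natural place for trimming and length caps. A small validation sketch (validateNoteInput is my own name, and the limits are illustrative, pick values that match your schema):

```typescript
// Validation layer for the wrapper above: trims input and enforces
// length caps before anything reaches the INSERT.
function validateNoteInput(
  customerId: string,
  note: string
): { customerId: string; note: string } {
  const id = customerId?.trim();
  const text = note?.trim();
  if (!id || !text) throw new Error("customerId and note are required");
  if (id.length > 64) throw new Error("customerId too long");
  if (text.length > 2000) throw new Error("note too long");
  return { customerId: id, note: text };
}
```

Call it at the top of addCustomerNote and log rejections, so bad agent output leaves an audit trail instead of dirty rows.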
Testing It
Run each script with npx tsx <file>.ts after setting DATABASE_URL and OPENAI_API_KEY in .env. First confirm the connection test prints a timestamp from PostgreSQL. Then confirm the table setup script inserts data successfully.
For the vector search step, make sure your table name matches what you configured and that embeddings can be written to Postgres. For the SQL agent step, ask a simple count query first; if that works consistently, move on to more complex questions like filtering by date or grouping by status.
Next Steps
- Add row-level security and role-based access before exposing any SQL tool to an LLM
- Store conversation state in Postgres with LangGraph if you need durable multi-step agents
- Add pgvector indexes and ingestion pipelines for higher-volume semantic search
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.