LlamaIndex Tutorial (TypeScript): connecting to PostgreSQL for beginners
This tutorial shows you how to connect a TypeScript app using LlamaIndex to PostgreSQL, store and retrieve data, and run a simple query pipeline against real database content. You need this when your application already keeps source data in Postgres and you want LlamaIndex to read from it without moving everything into a separate vector store.
What You'll Need
- Node.js 18+ and npm
- A PostgreSQL database you can connect to
- A `DATABASE_URL` like `postgresql://user:password@localhost:5432/mydb`
- An OpenAI API key if you want to use an LLM for querying
- These packages: `llamaindex`, `pg`, `dotenv`, `typescript`, and `ts-node` or `tsx`
- A basic table in PostgreSQL with text content to index
Step-by-Step
- Install the dependencies and set up your environment variables. Keep the connection string in `.env`; don't hardcode credentials in your source.

```shell
npm init -y
npm install llamaindex pg dotenv
npm install -D typescript tsx @types/node @types/pg
```

```
# .env
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres
OPENAI_API_KEY=your_openai_key_here
```
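If you plan to compile with `tsc` rather than run files directly through `tsx`, a minimal `tsconfig.json` along these lines should work; the exact compiler options here are my suggestion, not something this tutorial strictly requires:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```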
- Create a simple PostgreSQL table and seed it with a few rows. This example uses a plain text column so you can see the end-to-end flow without extra schema complexity.

```sql
CREATE TABLE IF NOT EXISTS support_articles (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  body TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO support_articles (title, body) VALUES
  ('Resetting passwords', 'Users can reset passwords from the login page by clicking Forgot password.'),
  ('Card replacement', 'To replace a lost card, verify identity and issue a replacement in the admin portal.'),
  ('Wire transfer limits', 'Daily wire transfer limits depend on account tier and compliance settings.');
```
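Before wiring up LlamaIndex, it's worth confirming the seed data actually landed. A quick check in `psql`:

```sql
-- Should return 3 after the insert above
SELECT count(*) FROM support_articles;
```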
- Write a small TypeScript script that reads rows from PostgreSQL and turns them into LlamaIndex documents. This is the simplest production-friendly pattern: let Postgres remain the source of truth, then load only the fields you need into documents.

```ts
import "dotenv/config";
import pg from "pg";
import { Document } from "llamaindex";

const { Client } = pg;

async function loadDocuments() {
  const client = new Client({
    connectionString: process.env.DATABASE_URL,
  });
  await client.connect();

  try {
    const result = await client.query<{
      id: number;
      title: string;
      body: string;
    }>("SELECT id, title, body FROM support_articles ORDER BY id ASC");

    // Title plus body in one text field gives retrieval enough context;
    // the metadata lets you trace answers back to a row.
    return result.rows.map(
      (row) =>
        new Document({
          text: `${row.title}\n\n${row.body}`,
          metadata: { id: row.id, title: row.title },
        })
    );
  } finally {
    // Close the connection even if the query throws.
    await client.end();
  }
}

loadDocuments().then((docs) => {
  console.log(`Loaded ${docs.length} documents`);
});
```
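The row-to-document mapping above is plain string formatting, so you can sanity-check it without a database. This sketch pulls that mapping into a standalone function (the `toDocumentText` name is mine, not part of LlamaIndex):

```typescript
// Shape of a row returned by the support_articles query.
type ArticleRow = { id: number; title: string; body: string };

// Mirrors the template used when constructing each Document:
// title, a blank line, then the body.
function toDocumentText(row: ArticleRow): string {
  return `${row.title}\n\n${row.body}`;
}

const sample: ArticleRow = {
  id: 2,
  title: "Card replacement",
  body: "To replace a lost card, verify identity and issue a replacement in the admin portal.",
};

console.log(toDocumentText(sample));
```

Keeping the title on the first line means each chunk carries its topic with it, which tends to help retrieval on short rows.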
- Build an index over those documents and query it with an OpenAI-backed engine. In LlamaIndex, this gives you semantic retrieval over your PostgreSQL content without changing the database schema.

```ts
import "dotenv/config";
import pg from "pg";
import { Document, VectorStoreIndex } from "llamaindex";

const { Client } = pg;

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const result = await client.query<{ id: number; title: string; body: string }>(
    "SELECT id, title, body FROM support_articles ORDER BY id ASC"
  );
  await client.end();

  const documents = result.rows.map(
    (row) =>
      new Document({
        text: `${row.title}\n\n${row.body}`,
        metadata: { id: row.id, title: row.title },
      })
  );

  // Embedding and answering use the default OpenAI models,
  // so OPENAI_API_KEY must be set in the environment.
  const index = await VectorStoreIndex.fromDocuments(documents);
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "How do I replace a lost card?",
  });
  console.log(String(response));
}

main().catch(console.error);
```
- If you want persistence across runs, store vectors in PostgreSQL instead of rebuilding every time. For beginners, this is the point where many teams move from demo code to something they can actually deploy. Note that this step needs an extra package (`npm install @llamaindex/postgres`) and the pgvector extension enabled in your database.

```ts
import "dotenv/config";
import { Document, StorageContext, VectorStoreIndex } from "llamaindex";
import { PGVectorStore } from "@llamaindex/postgres";

async function buildPersistentIndex() {
  // Vectors are written to the llamaindex_vectors table instead of memory.
  const vectorStore = await PGVectorStore.fromParams({
    connectionString: process.env.DATABASE_URL!,
    tableName: "llamaindex_vectors",
    dimensions: 1536, // must match the embedding model's output size
    hybridSearch: false,
  });

  const storageContext = await StorageContext.fromDefaults({
    vectorStore,
  });

  const docs = [
    new Document({
      text: "PostgreSQL stores transaction records.",
      metadata: { type: "finance" },
    }),
    new Document({
      text: "Insurance claims can be tracked by claim ID.",
      metadata: { type: "claims" },
    }),
  ];

  await VectorStoreIndex.fromDocuments(docs, {
    storageContext,
  });
  console.log("Persistent index built");
}

buildPersistentIndex().catch(console.error);
```
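PGVectorStore relies on the pgvector extension, so enable it once per database before the first run. The table name in the second check below assumes the `tableName` used above:

```sql
-- Run once, as a role with CREATE privilege on the database
CREATE EXTENSION IF NOT EXISTS vector;

-- After indexing, the stored vectors should be visible here
SELECT count(*) FROM llamaindex_vectors;
```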
Testing It
Run your SQL seed script first, then execute the TypeScript file with `npx tsx your-file.ts`. If the connection is correct, you should see either the number of loaded documents or a natural-language answer returned by the query engine.
If you get a connection error, check `DATABASE_URL` first: database name, port, username, and password. If queries work but results look poor, make sure each document has enough context in its `text` field; one-line rows are usually not enough for good retrieval.
For persistent storage, confirm that the `llamaindex_vectors` table exists after indexing. Then rerun the script and verify that it reads from the same database without failing on missing tables.
Next Steps
- Add metadata filters so queries can target specific business domains like claims, payments, or KYC
- Swap in PostgreSQL-backed vector storage for production persistence and faster reloads
- Add an API route around this pipeline so your app can answer questions directly from Postgres
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.