CrewAI Tutorial (TypeScript): Connecting to PostgreSQL for Advanced Developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to wire a CrewAI TypeScript agent to PostgreSQL so it can read, write, and reason over structured data instead of working from prompts alone. You need this when your agent has to pull customer records, persist task results, or act on operational data in a real system.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project initialized with tsconfig.json
  • PostgreSQL 14+ running locally or in Docker
  • A database user with read/write access to the target schema
  • An OpenAI API key for the CrewAI agent model
  • Packages:
    • crewai
    • pg
    • dotenv
    • typescript
    • tsx or ts-node for running TypeScript directly

Step-by-Step

  1. Set up the project and install dependencies. Keep the runtime simple: one agent, one database client, one tool wrapper.
mkdir crewai-postgres-demo
cd crewai-postgres-demo
npm init -y
npm install crewai pg dotenv
npm install -D typescript tsx @types/node @types/pg
npx tsc --init
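The defaults from tsc --init usually work. If you hit module-resolution issues when running with tsx, settings along these lines are a common starting point (a sketch, not the only valid configuration):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```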
  2. Create your environment file and a minimal schema. The agent needs a real table to query, not a mocked response hidden behind a tool.
# .env
OPENAI_API_KEY=your_openai_key_here
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/crewai_demo

-- schema.sql
CREATE TABLE IF NOT EXISTS customer_notes (
  id SERIAL PRIMARY KEY,
  customer_id TEXT NOT NULL,
  note TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
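Before wiring anything else, it helps to fail fast when a required environment variable is missing rather than letting the Pool connect with undefined. A minimal sketch — requireEnv is our own helper, not part of CrewAI or pg:

```typescript
// env.ts -- hypothetical startup helper, not part of CrewAI or pg
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup, before creating the pool or the agent:
// const databaseUrl = requireEnv("DATABASE_URL");
// const apiKey = requireEnv("OPENAI_API_KEY");
```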
  3. Add a PostgreSQL helper that exposes safe query functions. In production, keep SQL in one place and avoid letting the agent generate arbitrary statements without guardrails.
// db.ts
import { Pool, QueryResultRow } from "pg";
import "dotenv/config";

export const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

// T must extend QueryResultRow to satisfy pg's generic constraint.
export async function query<T extends QueryResultRow = any>(
  text: string,
  params: unknown[] = []
): Promise<T[]> {
  const result = await pool.query<T>(text, params);
  return result.rows;
}

export async function closeDb() {
  await pool.end();
}
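The guardrail mentioned above can be made concrete. The sketch below rejects anything other than a single parameterized SELECT — assertSafeSelect is a name we are introducing here, not a CrewAI or pg API, and a real deployment would want a stricter allowlist:

```typescript
// guardrail.ts -- illustrative only; tighten for real workloads
export function assertSafeSelect(sql: string): void {
  const trimmed = sql.trim();
  // Only allow statements that start with SELECT.
  if (!/^select\b/i.test(trimmed)) {
    throw new Error("Only SELECT statements are allowed");
  }
  // Reject stacked statements like "SELECT 1; DROP TABLE ...".
  if (trimmed.replace(/;\s*$/, "").includes(";")) {
    throw new Error("Multiple statements are not allowed");
  }
  // Reject inline string literals; values must arrive via $1, $2, ... params.
  if (trimmed.includes("'")) {
    throw new Error("Use parameterized queries instead of inline literals");
  }
}
```

Call it at the top of the query helper so every tool inherits the check from one place.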
  4. Wrap PostgreSQL access as a CrewAI tool. This keeps the agent interface clean and gives you a single control point for validation, logging, and permissions.
// tools.ts
import { Tool } from "crewai";
import { query } from "./db";

export const fetchCustomerNotes = new Tool({
  name: "fetch_customer_notes",
  description: "Fetch notes for a given customer_id from PostgreSQL.",
  func: async (customerId: string) => {
    const rows = await query(
      "SELECT id, customer_id, note, created_at FROM customer_notes WHERE customer_id = $1 ORDER BY created_at DESC LIMIT 10",
      [customerId]
    );
    return JSON.stringify(rows);
  },
});
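Tool inputs come from the model, so validating them before they reach SQL is cheap insurance, even with parameterized queries. A sketch, assuming IDs follow the CUST-&lt;digits&gt; pattern used in this tutorial's test data — isValidCustomerId is our own helper, not a CrewAI API:

```typescript
// validate.ts -- hypothetical helper for this tutorial's ID format
const CUSTOMER_ID_PATTERN = /^CUST-\d+$/;

export function isValidCustomerId(id: string): boolean {
  return CUSTOMER_ID_PATTERN.test(id);
}

// Inside the tool, reject bad input before querying:
// if (!isValidCustomerId(customerId)) {
//   return JSON.stringify({ error: "Invalid customer_id format" });
// }
```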
  5. Build the agent and give it an explicit job. The important part is that the model can use the database tool, but it still needs clear instructions about what it is allowed to do.
// index.ts
import "dotenv/config";
import { Agent } from "crewai";
import { fetchCustomerNotes } from "./tools";
import { closeDb } from "./db";

async function main() {
  const agent = new Agent({
    role: "Support Data Assistant",
    goal: "Retrieve relevant customer notes from PostgreSQL and summarize them accurately.",
    backstory: "You help support teams inspect customer history stored in PostgreSQL.",
    tools: [fetchCustomerNotes],
    verbose: true,
    llm: "gpt-4o-mini",
  });

  const result = await agent.run(
    "Find the latest notes for customer_id CUST-1001 and summarize the main issue."
  );

  console.log(result);
  await closeDb();
}

main().catch(async (err) => {
  console.error(err);
  await closeDb();
  process.exit(1);
});
  6. Insert test data and run the script. This gives you an end-to-end check that proves CrewAI can call PostgreSQL through your tool layer.
-- seed.sql
INSERT INTO customer_notes (customer_id, note)
VALUES
  ('CUST-1001', 'Customer reported duplicate premium charge on invoice #4481'),
  ('CUST-1001', 'Billing team confirmed refund was initiated'),
  ('CUST-2002', 'Policy document upload failed due to invalid PDF encoding');

npx tsx index.ts

Testing It

First, confirm your script prints verbose agent output showing that the tool was called before the final answer is produced. Then verify the summary references only rows returned from PostgreSQL and not invented details.

If you want stronger validation, change the customer_id in the prompt and confirm the output changes accordingly. Also test an empty result set so you can see how your agent behaves when no matching records exist.
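One way to make the empty-result case explicit is to have the tool return a sentinel message instead of a bare []. A small sketch — formatToolResult is a local helper we are introducing, not a CrewAI API:

```typescript
// Returns a clear message for empty result sets so the agent
// does not invent details when no rows come back.
export function formatToolResult(rows: unknown[]): string {
  if (rows.length === 0) {
    return JSON.stringify({ message: "No matching records found" });
  }
  return JSON.stringify(rows);
}
```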

For production work, add logging around each SQL call and assert that only parameterized queries are allowed. That is where most bad integrations fail.
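The logging suggestion above can be sketched as a wrapper around the query helper, so every SQL call goes through one instrumented path — withQueryLogging is our own name; adapt it to your logger:

```typescript
// logging.ts -- sketch of a logging wrapper around a query function
type QueryFn = (text: string, params?: unknown[]) => Promise<unknown[]>;

export function withQueryLogging(
  queryFn: QueryFn,
  log: (msg: string) => void = console.log
): QueryFn {
  return async (text, params = []) => {
    const start = Date.now();
    const rows = await queryFn(text, params);
    // Log the statement, parameter count, row count, and duration.
    log(
      `sql=${JSON.stringify(text)} params=${params.length} rows=${rows.length} ms=${Date.now() - start}`
    );
    return rows;
  };
}
```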

Next Steps

  • Add a second tool for writing audit events back into PostgreSQL after each agent action.
  • Put row-level security on sensitive tables before exposing them to any AI workflow.
  • Extend this pattern with a retrieval layer that combines PostgreSQL metadata with vector search for richer case handling.


By Cyprian Aarons, AI Consultant at Topiax.
