AutoGen Tutorial (TypeScript): connecting to PostgreSQL for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to wire an AutoGen TypeScript agent to PostgreSQL so the agent can read, write, and reason over real relational data. You need this when your agent must answer questions from operational data, persist conversation state, or run controlled database actions instead of pretending everything lives in memory.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • A PostgreSQL instance you can connect to
  • An OpenAI API key
  • These packages:
    • @autogenai/autogen
    • pg
    • dotenv
    • zod
  • A .env file with:
    • OPENAI_API_KEY
    • DATABASE_URL

Step-by-Step

  1. Install dependencies and set up your project. Keep the environment clean and make sure TypeScript can run ES module imports without extra friction.
mkdir autogen-pg-tutorial
cd autogen-pg-tutorial
npm init -y
npm i @autogenai/autogen pg dotenv zod
npm i -D typescript tsx @types/node @types/pg
npx tsc --init --rootDir src --outDir dist --module nodenext --moduleResolution nodenext --target es2022 --esModuleInterop true
mkdir src
  2. Add your environment variables and create a small PostgreSQL table. For production work, keep schema changes explicit and idempotent.
cat > .env << 'EOF'
OPENAI_API_KEY=your_openai_api_key_here
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/autogen_demo
EOF

cat > src/init-db.ts << 'EOF'
import 'dotenv/config';
import { Client } from 'pg';

const client = new Client({ connectionString: process.env.DATABASE_URL });

async function main() {
  await client.connect();
  await client.query(`
    CREATE TABLE IF NOT EXISTS support_notes (
      id SERIAL PRIMARY KEY,
      customer_id TEXT NOT NULL,
      note TEXT NOT NULL,
      created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    )
  `);
  await client.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
EOF

npx tsx src/init-db.ts
  3. Build a thin PostgreSQL repository layer and save it as src/db.ts. This keeps SQL out of your agent wiring and gives you one place to enforce parameterized queries.
import 'dotenv/config';
import { Client } from 'pg';

export type SupportNote = {
  id: number;
  customer_id: string;
  note: string;
  created_at: string;
};

const client = new Client({ connectionString: process.env.DATABASE_URL });

let connected = false;

export async function dbConnect() {
  if (connected) return;
  await client.connect();
  connected = true;
}

export async function addSupportNote(customerId: string, note: string) {
  await dbConnect();
  const result = await client.query<SupportNote>(
    `INSERT INTO support_notes (customer_id, note)
     VALUES ($1, $2)
     RETURNING id, customer_id, note, created_at`,
    [customerId, note],
  );
  return result.rows[0];
}

export async function listSupportNotes(customerId: string) {
  await dbConnect();
  const result = await client.query<SupportNote>(
    `SELECT id, customer_id, note, created_at
     FROM support_notes
     WHERE customer_id = $1
     ORDER BY created_at DESC`,
    [customerId],
  );
  return result.rows;
}
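The listSupportNotes query above returns every row for a customer, which is fine for a demo but unbounded in production. One low-risk way to extend the repository layer is to clamp caller-supplied pagination before it ever reaches SQL. A minimal sketch, assuming a helper named clampPage (illustrative, not part of the tutorial code); the clamped values would still be passed through parameterized placeholders, never interpolated into the query string:

```typescript
// Hypothetical pagination clamp for the repository layer. Bounds limit/offset
// so a tool call can never request an unbounded or negative page.
type PageOptions = { limit?: number; offset?: number };

const MAX_LIMIT = 100;

function clampPage(opts: PageOptions = {}): { limit: number; offset: number } {
  const limit = Math.min(Math.max(Math.trunc(opts.limit ?? 20), 1), MAX_LIMIT);
  const offset = Math.max(Math.trunc(opts.offset ?? 0), 0);
  return { limit, offset };
}

// Sketch of use inside a repository method:
// const { limit, offset } = clampPage(opts);
// client.query('SELECT ... WHERE customer_id = $1 ORDER BY created_at DESC LIMIT $2 OFFSET $3',
//              [customerId, limit, offset]);
```

Because the clamp is pure, it is trivial to unit-test independently of the database.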
  4. Expose database actions as AutoGen tools and save them as src/tools.ts. This is the part that matters: the agent should call narrow tools, not receive raw SQL access.
import { z } from 'zod';
import { tool } from '@autogenai/autogen';
import { addSupportNote, listSupportNotes } from './db.js';

export const createSupportNoteTool = tool({
  name: 'create_support_note',
  description: 'Insert a support note for a customer into PostgreSQL.',
  parameters: z.object({
    customerId: z.string().min(1),
    note: z.string().min(1),
  }),
  execute: async ({ customerId, note }) => {
    const row = await addSupportNote(customerId, note);
    return JSON.stringify(row);
  },
});

export const listSupportNotesTool = tool({
  name: 'list_support_notes',
  description: 'List support notes for a specific customer from PostgreSQL.',
  parameters: z.object({
    customerId: z.string().min(1),
  }),
  execute: async ({ customerId }) => {
    const rows = await listSupportNotes(customerId);
    return JSON.stringify(rows);
  },
});
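The zod schemas above are what stand between the model's tool calls and your database: malformed arguments are rejected before any SQL runs. To see that contract without spinning up the full agent, the same check can be sketched as a plain guard function (validateNoteArgs is illustrative; in the real tools, zod performs this validation):

```typescript
// Illustrative stand-in for z.object({ customerId: z.string().min(1), note: z.string().min(1) }).
// Returns the parsed args or a rejection, mirroring a safeParse-style result.
type NoteArgs = { customerId: string; note: string };
type ParseResult = { ok: true; value: NoteArgs } | { ok: false; error: string };

function validateNoteArgs(input: unknown): ParseResult {
  if (typeof input !== 'object' || input === null) {
    return { ok: false, error: 'expected an object' };
  }
  const { customerId, note } = input as Record<string, unknown>;
  if (typeof customerId !== 'string' || customerId.length < 1) {
    return { ok: false, error: 'customerId must be a non-empty string' };
  }
  if (typeof note !== 'string' || note.length < 1) {
    return { ok: false, error: 'note must be a non-empty string' };
  }
  return { ok: true, value: { customerId, note } };
}
```

The point of the sketch: a tool call with an empty customerId never reaches addSupportNote, so the failure surfaces as a structured error the agent can react to rather than a database exception.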
  5. Create the AutoGen agent, attach the tools, and save the file as src/agent.ts. Use a model config that matches your OpenAI account and keep the system prompt strict about when to use the database.
import 'dotenv/config';
import { AssistantAgent } from '@autogenai/autogen';
import { createSupportNoteTool, listSupportNotesTool } from './tools.js';

const agent = new AssistantAgent({
  name: 'postgres-support-agent',
  modelClientOptions: {
    model: 'gpt-4o-mini',
    apiKey: process.env.OPENAI_API_KEY!,
  },
});

agent.registerTools([createSupportNoteTool, listSupportNotesTool]);

async function main() {
  const result = await agent.run({
    task:
      'Add a support note for customer CUST-1001 saying "Called back about invoice discrepancy". Then list all notes for CUST-1001.',
    systemMessage:
      'You are a support operations assistant. Use database tools only for persistence and retrieval. Never invent stored notes.',
  });

  console.log(result.output);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
  6. Run the agent and inspect both the terminal output and the database rows. If the tool wiring is correct, you should see the inserted record returned by PostgreSQL and then included in the final answer.
npx tsx src/agent.ts
psql "$DATABASE_URL" -c "SELECT * FROM support_notes ORDER BY created_at DESC;"

Testing It

Start by inserting one known note through the agent and confirm it appears in support_notes. Then ask the agent to list notes for that same customerId and verify it returns only rows from PostgreSQL, not a fabricated summary.

If you want a stricter test, change the prompt to request a different customer ID that has no rows. The correct behavior is an empty result set or a clear “no notes found” response generated from actual query output.
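One way to make that behavior testable in isolation is to keep the final summary a pure function of the rows PostgreSQL returned, so an empty result set cannot be embellished into invented notes. A minimal sketch, assuming a helper named summarizeNotes (hypothetical, not part of the tutorial code):

```typescript
// Derive the user-facing summary strictly from query output. An empty rows
// array yields a "no notes found" message instead of fabricated content.
type NoteRow = { id: number; note: string };

function summarizeNotes(customerId: string, rows: NoteRow[]): string {
  if (rows.length === 0) return `No notes found for ${customerId}.`;
  return `${rows.length} note(s) for ${customerId}: ` + rows.map((r) => r.note).join('; ');
}
```

Feeding the tool's actual query result through a function like this makes the "empty customer" test a plain assertion rather than a judgment call about model output.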

For production validation, test concurrent writes with two terminal sessions hitting the same customer ID. That catches transaction issues early and confirms your parameterized queries are safe under load.

Next Steps

  • Add transaction handling for multi-step workflows like “create ticket + write audit log”
  • Wrap reads in repository methods with explicit pagination and filtering
  • Move from simple tools to guarded workflows where the agent can propose SQL but never execute arbitrary statements
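The last bullet, letting the agent propose SQL without executing arbitrary statements, can start as a simple read-only gate in front of the executor. A minimal sketch, with the caveat that string matching is demo-level only and a production guard should parse the SQL with a real parser (for example, pgsql-ast-parser); isReadOnlySelect is a hypothetical name:

```typescript
// Naive read-only gate: accept a single SELECT statement, reject everything else.
// A real guard should parse the SQL rather than pattern-match it.
function isReadOnlySelect(sql: string): boolean {
  const trimmed = sql.trim().replace(/;+\s*$/, ''); // drop a trailing semicolon
  if (trimmed.includes(';')) return false;          // refuse multi-statement batches
  return /^select\b/i.test(trimmed);                // must begin with SELECT
}
```

Even this crude gate changes the threat model: a proposed DELETE or a piggybacked second statement is refused before it reaches pg, and the agent gets a rejection it can surface instead of silently mutating data.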


By Cyprian Aarons, AI Consultant at Topiax.
