CrewAI Tutorial (Python): connecting to PostgreSQL for intermediate developers
This tutorial shows you how to wire a CrewAI Python agent to PostgreSQL, persist structured outputs, and query them back for downstream workflows. You need this when your agent is producing business data that should live in a real database instead of disappearing after the run.
What You'll Need
- Python 3.10+
- A PostgreSQL instance you can connect to
- A PostgreSQL user with CREATE TABLE, INSERT, and SELECT permissions
- An OpenAI API key set as OPENAI_API_KEY
- CrewAI installed: crewai
- The PostgreSQL driver installed: psycopg2-binary
- Optional but useful: python-dotenv for local environment variables
- A .env file or shell exports for:
  - OPENAI_API_KEY
  - DATABASE_URL
Step-by-Step
- Install the dependencies and confirm your environment variables are available.
For production work, I prefer using a single DSN string for PostgreSQL because it keeps local dev and deployed configs aligned.
pip install crewai psycopg2-binary python-dotenv
export OPENAI_API_KEY="your-openai-key"
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/crewai_demo"
- Create a small PostgreSQL table for storing agent results.
Keep the schema simple at first: one row per task output, with timestamps for auditability.
import os
import psycopg2

DATABASE_URL = os.environ["DATABASE_URL"]

ddl = """
CREATE TABLE IF NOT EXISTS crewai_results (
    id SERIAL PRIMARY KEY,
    task_name TEXT NOT NULL,
    result TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
"""

with psycopg2.connect(DATABASE_URL) as conn:
    with conn.cursor() as cur:
        cur.execute(ddl)
    conn.commit()

print("Table ready")
- Define a CrewAI agent and task that produces structured output.
The important part here is not just getting text back, but getting predictable content you can store and query later.
from crewai import Agent, Task, Crew, Process
from crewai.llm import LLM

llm = LLM(model="gpt-4o-mini")

analyst = Agent(
    role="Data Analyst",
    goal="Summarize customer churn risk in one paragraph",
    backstory="You write concise operational summaries for internal teams.",
    llm=llm,
)

task = Task(
    description=(
        "Write a short churn-risk summary for a banking product team. "
        "Return plain text only."
    ),
    expected_output="A concise churn-risk summary",
    agent=analyst,
)

crew = Crew(
    agents=[analyst],
    tasks=[task],
    process=Process.sequential,
)

result = crew.kickoff()
print(result)
- Persist the CrewAI result into PostgreSQL.
This is the pattern you want in real systems: run the agent, validate the output shape, then write it to the database in one transaction.
import os
import psycopg2

DATABASE_URL = os.environ["DATABASE_URL"]

task_name = "churn_summary"
output_text = str(result).strip()

insert_sql = """
INSERT INTO crewai_results (task_name, result)
VALUES (%s, %s)
RETURNING id, created_at;
"""

with psycopg2.connect(DATABASE_URL) as conn:
    with conn.cursor() as cur:
        cur.execute(insert_sql, (task_name, output_text))
        row_id, created_at = cur.fetchone()
    conn.commit()

print(f"Saved row {row_id} at {created_at}")
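The step above mentions validating the output shape before writing it. A minimal sketch of what that might look like — validate_output and MAX_RESULT_CHARS are illustrative names I'm introducing here, not part of CrewAI or psycopg2:

```python
MAX_RESULT_CHARS = 10_000  # arbitrary cap to catch runaway model outputs

def validate_output(text: str) -> str:
    """Return a cleaned result string, or raise ValueError if it is unusable."""
    cleaned = text.strip()
    if not cleaned:
        raise ValueError("agent returned an empty result")
    if len(cleaned) > MAX_RESULT_CHARS:
        raise ValueError(f"result exceeds {MAX_RESULT_CHARS} characters")
    return cleaned
```

You would call this between kickoff and the INSERT, e.g. output_text = validate_output(str(result)), so a bad run fails loudly instead of writing junk rows.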
- Read the stored rows back and use them in another workflow step.
Once the data is in PostgreSQL, you can build reporting jobs, dashboards, or follow-up agents that inspect historical outputs.
import os
import psycopg2

DATABASE_URL = os.environ["DATABASE_URL"]

select_sql = """
SELECT id, task_name, result, created_at
FROM crewai_results
ORDER BY id DESC
LIMIT 5;
"""

with psycopg2.connect(DATABASE_URL) as conn:
    with conn.cursor() as cur:
        cur.execute(select_sql)
        rows = cur.fetchall()

for row in rows:
    print(row)
Testing It
Run the script end to end and confirm you see three things: the CrewAI output printed to stdout, a successful insert message with an ID, and at least one row returned from the final query. If the insert works but the query returns nothing, check that both snippets are pointing at the same DATABASE_URL.
If you get a connection error, verify PostgreSQL is running and that your user has access to the target database. If CrewAI fails before the insert step, inspect your OpenAI key and model name first; database code is usually not the problem when the agent never produces output.
For a cleaner test loop, run the DDL once, then rerun only the agent plus insert code several times and confirm multiple rows accumulate in order.
Next Steps
- Add JSONB columns so agents can store structured payloads instead of plain text.
- Wrap DB writes in retry logic for transient connection failures.
- Build a second CrewAI task that reads recent PostgreSQL rows and generates an operational summary.
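To sketch the retry idea from the list above: a small exponential-backoff decorator you can wrap around the DB write. with_retries is a hypothetical helper, not part of CrewAI or psycopg2, and the attempt counts and delays are arbitrary starting points:

```python
import time
from functools import wraps

def with_retries(max_attempts=3, base_delay=0.5, retry_on=(Exception,)):
    """Retry a function with exponential backoff on the given exception types."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except retry_on:
                    if attempt == max_attempts:
                        raise  # out of attempts: surface the original error
                    time.sleep(base_delay * (2 ** (attempt - 1)))
        return wrapper
    return decorator
```

In the insert step you might apply it as @with_retries(retry_on=(psycopg2.OperationalError,)) on a save_result function, since OperationalError is where psycopg2 reports most transient connection failures.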
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.