LangGraph Tutorial (TypeScript): connecting to PostgreSQL for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to wire a LangGraph TypeScript agent to PostgreSQL so your graph can persist state, recover after restarts, and support multi-step workflows with durable memory. You need this when a stateless in-memory graph is not enough and you want thread history, checkpoints, or agent state that survives process crashes.

What You'll Need

  • Node.js 18+ and npm
  • A PostgreSQL instance you can connect to locally or in the cloud
  • OPENAI_API_KEY
  • DATABASE_URL for Postgres, for example:
    • postgresql://postgres:postgres@localhost:5432/langgraph
  • These packages:
    • @langchain/langgraph
    • @langchain/langgraph-checkpoint-postgres
    • @langchain/openai
    • pg
    • typescript
    • tsx

Step-by-Step

  1. Install the dependencies and create a TypeScript project.
    Keep this lean: LangGraph, the Postgres checkpointer package, OpenAI, and the Postgres driver are all you need for a durable graph with checkpointing.
npm init -y
npm install @langchain/langgraph @langchain/langgraph-checkpoint-postgres @langchain/openai pg
npm install -D typescript tsx @types/node
npx tsc --init
  2. Set up your environment variables.
    Use a .env file if you want, but for clarity here’s the shape of the values your app needs at runtime.
export const config = {
  OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? "",
  DATABASE_URL:
    process.env.DATABASE_URL ??
    "postgresql://postgres:postgres@localhost:5432/langgraph",
};

if (!config.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY");
}
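If you go the .env route instead, the equivalent shell exports look like this (the values below are placeholders, not real credentials):

```shell
# Placeholder values shown; substitute your real key and connection string.
export OPENAI_API_KEY="your-openai-key"
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/langgraph"
```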
  3. Create a PostgreSQL-backed checkpointer and a simple graph.
    The important part is the PostgresSaver: it stores checkpoints in Postgres so each thread can resume from previous state instead of starting over.
import { ChatOpenAI } from "@langchain/openai";
import { Annotation, StateGraph } from "@langchain/langgraph";
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";
import { Pool } from "pg";

const State = Annotation.Root({
  messages: Annotation<any[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

async function main() {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const checkpointer = new PostgresSaver(pool);
  await checkpointer.setup();

  const graph = new StateGraph(State)
    .addNode("assistant", async (state) => {
      const response = await model.invoke(state.messages);
      return { messages: [response] };
    })
    .addEdge("__start__", "assistant")
    .addEdge("assistant", "__end__")
    .compile({ checkpointer });

  const result = await graph.invoke(
    { messages: [{ role: "user", content: "Write one sentence about Postgres checkpoints." }] },
    { configurable: { thread_id: "thread-1" } }
  );

  console.log(result.messages.at(-1)?.content);
  await pool.end();
}

main();
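A note on the reducer in the State definition: LangGraph calls it whenever a node returns a messages update, appending to the channel rather than overwriting it. A standalone sketch of that merge behavior (plain TypeScript, no LangGraph needed):

```typescript
// Same reducer shape as the State definition above: concatenates updates.
const appendReducer = <T>(left: T[], right: T[]): T[] => left.concat(right);

// Each node return value is merged into the channel via the reducer,
// so history accumulates across steps (and across invokes on one thread).
const afterUser = appendReducer([], [{ role: "user", content: "hi" }]);
const afterAssistant = appendReducer(afterUser, [
  { role: "assistant", content: "hello" },
]);
console.log(afterAssistant.length); // 2
```

This is why the second invoke on the same thread_id sees the earlier messages: the checkpointer restores the accumulated channel, and the reducer appends the new turn to it.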
  4. Run the graph twice with the same thread ID to prove persistence.
    The first run creates the checkpoint. The second run uses the same thread_id, which is how LangGraph ties future invocations back to the same conversation or workflow.
import { ChatOpenAI } from "@langchain/openai";
import { Annotation, StateGraph } from "@langchain/langgraph";
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";
import { Pool } from "pg";

const State = Annotation.Root({
  messages: Annotation<any[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

async function run() {
  const model = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  });

  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const checkpointer = new PostgresSaver(pool);
  await checkpointer.setup();

  const graph = new StateGraph(State)
    .addNode("assistant", async (state) => {
      const response = await model.invoke(state.messages);
      return { messages: [response] };
    })
    .addEdge("__start__", "assistant")
    .addEdge("assistant", "__end__")
    .compile({ checkpointer });

  const config = { configurable: { thread_id: "customer-42" } };

  const first = await graph.invoke(
    { messages: [{ role: "user", content: "Remember that my policy number is P-12345." }] },
    config
  );

  const second = await graph.invoke(
    { messages: [{ role: "user", content: "What policy number did I mention?" }] },
    config
  );

  console.log(first.messages.at(-1)?.content);
  console.log(second.messages.at(-1)?.content);

  await pool.end();
}

run();
  5. Add a real workflow pattern instead of a single-turn chat.
    In production, you usually want checkpoints around decisions, approvals, or document extraction. The same Postgres setup works for branching graphs because every step can resume from durable state.
import { Annotation, StateGraph } from "@langchain/langgraph";
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";
import { Pool } from "pg";

const WorkflowState = Annotation.Root({
  amountCents: Annotation<number>({
    reducer: (_current, next) => next,
    default: () => 0,
  }),
  approved: Annotation<boolean>({
    reducer: (_current, next) => next,
    default: () => false,
  }),
});

async function workflowExample() {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const checkpointer = new PostgresSaver(pool);
  await checkpointer.setup();

  const graph = new StateGraph(WorkflowState)
    .addNode("validate", async (state) => ({
      approved: state.amountCents <= 100000,
    }))
    .addNode("autoApprove", async () => ({ approved: true }))
    .addNode("manualReview", async () => ({ approved: false }))
    .addEdge("__start__", "validate")
    // Branch on the validation result so the graph actually forks.
    .addConditionalEdges("validate", (state) =>
      state.approved ? "autoApprove" : "manualReview"
    )
    .addEdge("autoApprove", "__end__")
    .addEdge("manualReview", "__end__")
    .compile({ checkpointer });

  const result = await graph.invoke(
    { amountCents: 75000 },
    { configurable: { thread_id: "claim-9001" } }
  );

  console.log(result.approved);
  await pool.end();
}

workflowExample();
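The approval threshold is the only business rule here, so it is worth seeing in isolation. A standalone version of the check the validate node applies (the function name is illustrative; the 100000-cent cutoff comes from the graph above):

```typescript
// Same rule as the validate node: auto-approve at or below 100000 cents.
const isAutoApprovable = (amountCents: number): boolean =>
  amountCents <= 100000;

console.log(isAutoApprovable(75000));  // true
console.log(isAutoApprovable(250000)); // false
```

Keeping decision rules as pure functions like this makes them unit-testable without a database or a graph run.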

Testing It

Run PostgreSQL locally and make sure the database in DATABASE_URL exists before starting your script. Then execute your entry file with tsx (for example, npx tsx src/index.ts) and confirm you get output on both runs with the same thread_id. If you want to verify persistence directly, inspect the checkpoint tables created by checkpointer.setup() in your database. After restarting your Node process, invoke the same thread again and confirm LangGraph resumes using stored state instead of treating it as a fresh session.
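To inspect the stored checkpoints directly, query the tables the saver created. Exact table and column names can vary between package versions, so check your schema first; on recent versions the main table is checkpoints:

```sql
-- List threads and their checkpoints (schema may differ across versions).
SELECT thread_id, checkpoint_id
FROM checkpoints
ORDER BY thread_id, checkpoint_id;
```

You should see one row per saved step, keyed by the thread_id values you passed in configurable.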

Next Steps

  • Add tool nodes and persist tool results alongside message history.
  • Split your graph into supervisor/worker subgraphs for regulated workflows.
  • Store metadata like tenant ID and case ID in configurable so checkpoints map cleanly to business entities.
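On the last point: configurable accepts arbitrary keys beyond thread_id, so business identifiers can ride along with every invocation. A minimal sketch (key names like tenant_id and case_id are illustrative, not anything LangGraph requires):

```typescript
// thread_id drives checkpoint lookup; the extra keys are app-level metadata.
const invokeConfig = {
  configurable: {
    thread_id: "tenant-7:case-123",
    tenant_id: "tenant-7", // illustrative custom key
    case_id: "case-123",   // illustrative custom key
  },
};

console.log(invokeConfig.configurable.thread_id); // tenant-7:case-123
```

Encoding the tenant and case into the thread_id itself, as above, also keeps checkpoint rows easy to correlate when you query the database directly.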


By Cyprian Aarons, AI Consultant at Topiax.
