How to Integrate LangChain with PostgreSQL for Production Insurance AI

By Cyprian Aarons · Updated 2026-04-21
Tags: langchain-for-insurance, postgresql, production-ai

Combining LangChain with PostgreSQL gives you a clean pattern for production insurance AI agents: the model handles policy reasoning, claims triage, and document workflows, while PostgreSQL stores customer state, conversation history, policy metadata, and audit trails.

That matters because insurance systems need memory, traceability, and deterministic retrieval. You do not want an agent guessing from chat history alone when it can query structured policy data and persist every decision.

Prerequisites

  • Python 3.10+
  • A PostgreSQL 14+ instance running locally or in your cloud environment
  • A database user with CREATE TABLE, INSERT, SELECT, and UPDATE permissions
  • LangChain installed with the PostgreSQL integration package
  • An LLM provider configured for LangChain
  • Environment variables set for:
    • DATABASE_URL
    • your model API key, such as OPENAI_API_KEY

Install the core packages:

pip install langchain langchain-openai langchain-postgres psycopg[binary]

Integration Steps

  1. Set up PostgreSQL for agent state and insurance records.

Use a dedicated schema so your agent data does not mix with core policy tables. For production, keep conversational memory separate from claims and underwriting tables.

import os
import psycopg

DATABASE_URL = os.environ["DATABASE_URL"]

with psycopg.connect(DATABASE_URL) as conn:
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS insurance_claims (
                id SERIAL PRIMARY KEY,
                claim_id TEXT UNIQUE NOT NULL,
                customer_id TEXT NOT NULL,
                policy_number TEXT NOT NULL,
                status TEXT NOT NULL,
                summary TEXT NOT NULL,
                created_at TIMESTAMPTZ DEFAULT NOW()
            )
        """)
        conn.commit()
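
The dedicated-schema advice above can be sketched as follows. The `agent_state` schema and `conversation_audit` table are illustrative names for this guide, not anything LangChain requires:

```python
# DDL for a schema that keeps agent-owned tables away from core policy data.
AGENT_SCHEMA_DDL = [
    "CREATE SCHEMA IF NOT EXISTS agent_state",
    """
    CREATE TABLE IF NOT EXISTS agent_state.conversation_audit (
        id BIGSERIAL PRIMARY KEY,
        session_id TEXT NOT NULL,
        claim_id TEXT,
        event TEXT NOT NULL,
        created_at TIMESTAMPTZ DEFAULT NOW()
    )
    """,
]

def ensure_agent_schema(database_url: str) -> None:
    # Deferred import so the sketch loads even where the driver is absent.
    import psycopg

    with psycopg.connect(database_url) as conn:
        for statement in AGENT_SCHEMA_DDL:
            conn.execute(statement)
```

Both statements use IF NOT EXISTS, so the function is safe to call on every startup.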
  2. Create a PostgreSQL-backed chat history store for LangChain.

For production agents, use persistent message history instead of in-memory buffers. PostgresChatMessageHistory is the right primitive when you need replayable conversations across sessions.

import uuid
from langchain_postgres import PostgresChatMessageHistory

conn = psycopg.connect(DATABASE_URL)

# Create the message table once (idempotent), then open a session.
# langchain_postgres expects the session id to be a valid UUID string,
# so derive one deterministically from the readable session name.
PostgresChatMessageHistory.create_tables(conn, "langchain_chat_history")
session_id = str(uuid.uuid5(uuid.NAMESPACE_DNS, "claim-session-1001"))

history = PostgresChatMessageHistory(
    "langchain_chat_history",
    session_id,
    sync_connection=conn,
)

history.add_user_message("Customer reports water damage in kitchen.")
history.add_ai_message("I will check policy coverage and prior claims.")
  3. Build a LangChain chain that uses PostgreSQL context.

Here the agent pulls structured claim data from PostgreSQL, then uses that context to answer or route the request. This is the pattern you want in insurance: retrieve first, generate second.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an insurance claims assistant. Use only the provided claim data."),
    ("human", "Claim context: {claim_context}\n\nQuestion: {question}")
])

def get_claim_context(claim_id: str) -> str:
    with psycopg.connect(DATABASE_URL) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT claim_id, customer_id, policy_number, status, summary FROM insurance_claims WHERE claim_id = %s",
                (claim_id,)
            )
            row = cur.fetchone()
            if not row:
                return "No claim found."
            return f"claim_id={row[0]}, customer_id={row[1]}, policy_number={row[2]}, status={row[3]}, summary={row[4]}"

chain = prompt | llm

result = chain.invoke({
    "claim_context": get_claim_context("CLM-1001"),
    "question": "Should this be escalated to a human adjuster?"
})
print(result.content)
  4. Persist new agent decisions back into PostgreSQL.

Production agents should write decisions and status changes back to the database. That gives you auditability and lets downstream systems act on the result.

def save_claim_decision(claim_id: str, decision: str) -> None:
    with psycopg.connect(DATABASE_URL) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE insurance_claims SET status = %s WHERE claim_id = %s",
                (decision, claim_id)
            )
            conn.commit()

save_claim_decision("CLM-1001", "needs_human_review")
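
Because the decision string goes straight into the status column, it is worth validating it against an allowed set before the UPDATE runs. A minimal sketch; the status vocabulary here is an assumption for this guide, not a LangChain or schema requirement:

```python
# Allowed claim states; anything else is rejected before it reaches the DB.
ALLOWED_STATUSES = {"open", "approved", "denied", "needs_human_review"}

def validate_status(status: str) -> str:
    # Fail fast on unknown states so bad model output never corrupts records.
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"unknown claim status: {status}")
    return status
```

Call `validate_status(decision)` at the top of `save_claim_decision` so a malformed model response raises instead of silently writing garbage.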
  5. Wire retrieval plus memory into one agent flow.

This is where LangChain becomes useful in real operations: it can read structured claim data from PostgreSQL and also keep session history for follow-up questions. That combination is what makes an agent feel stateful without losing control of the source of truth.

import uuid

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableLambda
from langchain_postgres import PostgresChatMessageHistory

# Extend the prompt with prior session messages so follow-up
# questions have conversational context, not just the claim row.
agent_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an insurance claims assistant. Use only the provided claim data."),
    MessagesPlaceholder("history"),
    ("human", "Claim context: {claim_context}\n\nQuestion: {question}")
])

def build_input(payload):
    # langchain_postgres expects a UUID session id; derive it
    # deterministically from the readable session name.
    session_uuid = str(uuid.uuid5(uuid.NAMESPACE_DNS, payload["session_id"]))
    with psycopg.connect(DATABASE_URL) as conn:
        PostgresChatMessageHistory.create_tables(conn, "langchain_chat_history")
        session_history = PostgresChatMessageHistory(
            "langchain_chat_history",
            session_uuid,
            sync_connection=conn,
        )
        messages = session_history.messages
    return {
        "claim_context": get_claim_context(payload["claim_id"]),
        "question": payload["question"],
        "history": messages,
    }

insurance_agent = RunnableLambda(build_input) | agent_prompt | llm

response = insurance_agent.invoke({
    "session_id": "claim-session-1001",
    "claim_id": "CLM-1001",
    "question": "What additional documents do we need?"
})

print(response.content)

Testing the Integration

Run a simple end-to-end test by inserting a sample claim, querying it through LangChain, and checking that PostgreSQL updates correctly.

with psycopg.connect(DATABASE_URL) as conn:
    with conn.cursor() as cur:
        cur.execute("""
            INSERT INTO insurance_claims (claim_id, customer_id, policy_number, status, summary)
            VALUES (%s, %s, %s, %s, %s)
            ON CONFLICT (claim_id) DO UPDATE SET
                status = EXCLUDED.status,
                summary = EXCLUDED.summary
        """, ("CLM-1001", "CUST-9001", "POL-7788", "open", "Kitchen water damage after pipe burst"))
        conn.commit()

test_response = chain.invoke({
    "claim_context": get_claim_context("CLM-1001"),
    "question": "Summarize this claim for an adjuster."
})

print(test_response.content)

Expected output (exact wording will vary from run to run):

This is an open claim for customer CUST-9001 under policy POL-7788.
The reported loss is kitchen water damage after a pipe burst.
It should be reviewed by an adjuster for coverage assessment.

Real-World Use Cases

  • Claims intake agents that read policy and claim records from PostgreSQL, then classify severity and route cases automatically.
  • Underwriting assistants that pull historical submissions and produce risk summaries for human review.
  • Customer service bots that maintain long-lived conversation state in PostgreSQL while answering coverage questions from structured data.
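
For the first use case, a deterministic keyword pre-filter can handle obvious severity routing before any model call, keeping the LLM for the ambiguous middle. A sketch; the keyword lists and severity labels are illustrative:

```python
# Keyword buckets checked in priority order; tune these to your book of business.
SEVERITY_KEYWORDS = {
    "high": ("fire", "flood", "injury", "total loss"),
    "medium": ("water damage", "theft", "collision"),
}

def classify_severity(summary: str) -> str:
    # Case-insensitive substring match against each bucket, highest first.
    text = summary.lower()
    for severity, keywords in SEVERITY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return severity
    return "low"
```

Run claims through this first and only invoke the chain for cases the rules cannot settle; it is cheaper and its decisions are trivially auditable.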

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

