How to Integrate LangChain for insurance with PostgreSQL for multi-agent systems

By Cyprian Aarons · Updated 2026-04-21
Tags: langchain-for-insurance, postgresql, multi-agent-systems

When you build multi-agent systems for insurance, the hard part is not calling an LLM. It is keeping policy data, claims history, underwriting rules, and agent state consistent across multiple workers. LangChain for insurance gives you the orchestration layer; PostgreSQL gives you durable storage for conversation state, retrieval indexes, and audit trails.

Prerequisites

  • Python 3.10+
  • A running PostgreSQL instance
  • psycopg or psycopg2-binary
  • langchain
  • langchain-community
  • langchain-postgres
  • langchain-openai
  • Access to an LLM provider configured through LangChain (the examples below use OpenAI)
  • A database user with permissions to create tables and write rows

Install the packages:

pip install langchain langchain-community langchain-openai langchain-postgres "psycopg[binary]"

Set your environment variables:

export POSTGRES_URL="postgresql+psycopg://agent_user:agent_pass@localhost:5432/insurance_ai"
export OPENAI_API_KEY="your_key_here"

Integration Steps

  1. Create a PostgreSQL connection and schema for agent state

For multi-agent systems, each agent should persist its own working memory and shared case context. Use PostgreSQL as the source of truth so workers can recover after restarts.

import os
from sqlalchemy import create_engine, text

POSTGRES_URL = os.environ["POSTGRES_URL"]
engine = create_engine(POSTGRES_URL)

with engine.begin() as conn:
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS agent_case_state (
            case_id TEXT PRIMARY KEY,
            customer_id TEXT NOT NULL,
            status TEXT NOT NULL,
            summary TEXT NOT NULL,
            updated_at TIMESTAMPTZ DEFAULT now()
        )
    """))

    conn.execute(text("""
        INSERT INTO agent_case_state (case_id, customer_id, status, summary)
        VALUES ('CLM-1001', 'CUST-77', 'open', 'Initial claim received for windshield damage.')
        ON CONFLICT (case_id) DO NOTHING
    """))

  2. Configure a LangChain model for the insurance agent

Use LangChain’s chat model interface to drive policy Q&A, claim triage, or underwriting review. The important part is that the model is stateless; PostgreSQL holds the persistent context.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0.1,
)

triage_prompt = """
You are an insurance claims triage agent.
Classify the case into one of: low_risk, needs_review, escalate.
Return only the label and one short reason.
"""

response = llm.invoke([
    {"role": "system", "content": triage_prompt},
    {"role": "user", "content": "Customer reports cracked windshield after road debris impact."}
])

print(response.content)
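The model replies in free text, so it is worth normalizing the label before anything downstream trusts it. A minimal sketch (the label set mirrors the triage prompt above; `normalize_triage_label` is a hypothetical helper, not part of LangChain):

```python
# Allowed labels, matching the triage prompt above.
TRIAGE_LABELS = {"low_risk", "needs_review", "escalate"}

def normalize_triage_label(raw: str, default: str = "needs_review") -> str:
    """Extract a known triage label from free-form model output.

    Falls back to a conservative default so a malformed reply
    never writes an unknown status into agent_case_state.
    """
    text = raw.strip().lower()
    for label in TRIAGE_LABELS:
        if label in text:
            return label
    return default

print(normalize_triage_label("needs_review: impact damage may affect coverage"))  # needs_review
```

Routing unrecognized output to `needs_review` rather than raising keeps the pipeline moving while still forcing a human look at anything the model mangled.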

  3. Read case context from PostgreSQL before each agent decision

In a multi-agent workflow, every specialist agent should fetch the latest shared state before acting. That avoids stale decisions when another agent has already updated the claim.

from sqlalchemy import text

def load_case(case_id: str):
    with engine.begin() as conn:
        row = conn.execute(
            text("SELECT case_id, customer_id, status, summary FROM agent_case_state WHERE case_id = :case_id"),
            {"case_id": case_id},
        ).mappings().first()
    return dict(row) if row else None

case = load_case("CLM-1001")
print(case)

Now pass that state into your LangChain prompt:

prompt = f"""
Case ID: {case['case_id']}
Customer ID: {case['customer_id']}
Status: {case['status']}
Summary: {case['summary']}

Decide whether this claim needs human review.
"""

decision = llm.invoke(prompt)
print(decision.content)

  4. Persist the agent output back to PostgreSQL

This is where LangChain and PostgreSQL become a real system instead of a demo. Store the decision so downstream agents — fraud review, policy verification, payout estimation — can consume it.

def update_case(case_id: str, status: str, summary: str):
    with engine.begin() as conn:
        conn.execute(
            text("""
                UPDATE agent_case_state
                SET status = :status,
                    summary = :summary,
                    updated_at = now()
                WHERE case_id = :case_id
            """),
            {"case_id": case_id, "status": status, "summary": summary},
        )

update_case(
    "CLM-1001",
    "needs_review",
    "Claim triaged by LLM: possible coverage check required due to impact damage."
)

  5. Add vector search in PostgreSQL for policy retrieval

For insurance workflows, agents need access to policy clauses, endorsements, exclusions, and SOPs. Store embeddings in PostgreSQL using PGVector, then retrieve relevant documents during reasoning.

from langchain_openai import OpenAIEmbeddings
from langchain_postgres.vectorstores import PGVector
from langchain_core.documents import Document

embeddings = OpenAIEmbeddings()

vectorstore = PGVector(
    connection=POSTGRES_URL,
    embeddings=embeddings,
    collection_name="insurance_policy_docs",
)

docs = [
    Document(page_content="Windshield damage is covered if caused by accidental road debris impact."),
    Document(page_content="Wear-and-tear damage is excluded unless caused by a covered peril."),
]

vectorstore.add_documents(docs)

results = vectorstore.similarity_search("Is windshield damage covered by policy?", k=1)
print(results[0].page_content)
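During reasoning, the retrieved clauses can be folded into the decision prompt. A sketch of a hypothetical `build_coverage_prompt` helper (plain string assembly, independent of the vector store; in practice you would pass the `page_content` of each retrieved document):

```python
def build_coverage_prompt(case_summary: str, clauses: list[str]) -> str:
    """Assemble a coverage-check prompt from case state and retrieved policy clauses."""
    clause_block = "\n".join(f"- {clause}" for clause in clauses)
    return (
        "You are an insurance coverage checker.\n"
        f"Case summary: {case_summary}\n"
        "Relevant policy clauses:\n"
        f"{clause_block}\n"
        "Answer with one of: covered, not_covered, needs_human_review."
    )

prompt = build_coverage_prompt(
    "Cracked windshield after road debris impact.",
    ["Windshield damage is covered if caused by accidental road debris impact."],
)
print(prompt)
```

Keeping prompt assembly in a pure function like this makes the retrieval-augmented step easy to unit-test without a database or an API key.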

Testing the Integration

Run a simple end-to-end check: load state from PostgreSQL, ask LangChain to classify it, then write back the result.

case = load_case("CLM-1001")

prompt = f"""
You are an insurance claims assistant.
Case summary: {case['summary']}
Return one of: approved_for_review, reject, escalate.
"""

result = llm.invoke(prompt).content.strip()

update_case(
    "CLM-1001",
    result,
    f"LLM decision stored in PostgreSQL: {result}"
)

with engine.begin() as conn:
    row = conn.execute(
        text("SELECT case_id, status, summary FROM agent_case_state WHERE case_id = 'CLM-1001'")
    ).mappings().first()

print(row)

Expected output (the exact status and summary depend on the model's decision):

{'case_id': 'CLM-1001', 'status': 'approved_for_review', 'summary': 'LLM decision stored in PostgreSQL: approved_for_review'}

Real-World Use Cases

  • Claims triage pipeline

    • One agent classifies severity.
    • Another checks policy coverage using vector search.
    • A third writes structured updates into PostgreSQL for adjusters.
  • Underwriting assistant

    • Pull applicant history from PostgreSQL.
    • Retrieve underwriting guidelines from embedded documents.
    • Let separate agents score risk factors and store their recommendations.
  • Fraud investigation workflow

    • One agent flags anomalies in claim patterns.
    • Another cross-checks prior claims stored in PostgreSQL.
    • A supervisor agent aggregates findings and escalates cases with evidence trails.
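The claims-triage pipeline above can be sketched as a chain of specialist agents passing one shared case record. The stubs below stand in for the `llm.invoke`, `load_case`, and `update_case` calls shown earlier, so only the handoff pattern is visible (all function names here are illustrative):

```python
# Each agent reads the shared case dict, adds its finding, and passes it on.
# In production these would call the LLM and read/write agent_case_state.
def severity_agent(case: dict) -> dict:
    case["severity"] = "low" if "windshield" in case["summary"].lower() else "high"
    return case

def coverage_agent(case: dict) -> dict:
    # Would run vector search over policy clauses in PGVector.
    case["coverage"] = "covered" if case["severity"] == "low" else "needs_check"
    return case

def writer_agent(case: dict, store: dict) -> dict:
    # Would be an UPDATE on agent_case_state.
    case["status"] = "needs_review"
    store[case["case_id"]] = case
    return case

store: dict = {}
case = {"case_id": "CLM-1001", "summary": "Cracked windshield after road debris impact."}
for agent in (severity_agent, coverage_agent):
    case = agent(case)
case = writer_agent(case, store)
print(store["CLM-1001"]["status"])  # needs_review
```

The design point is that every agent takes the full case record and returns it enriched, so swapping PostgreSQL in for the `store` dict changes the persistence layer without touching the agent interfaces.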

If you want this production-ready, add transaction boundaries around each agent step and treat PostgreSQL as the coordination layer between agents. That gives you replayability, auditability, and clean handoff semantics across your LangChain-based insurance system.
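A transaction boundary around one agent step can look like the sketch below. SQLite stands in here only so the example runs anywhere; with PostgreSQL the same read-decide-write step would sit inside `engine.begin()` using your POSTGRES_URL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE agent_case_state (case_id TEXT PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO agent_case_state VALUES ('CLM-1001', 'open')")
conn.commit()

def run_agent_step(conn, case_id: str, new_status: str, fail: bool = False):
    """One agent step inside a single transaction: write, then commit or roll back."""
    try:
        with conn:  # commits on success, rolls back if the step raises
            conn.execute(
                "UPDATE agent_case_state SET status = ? WHERE case_id = ?",
                (new_status, case_id),
            )
            if fail:
                raise RuntimeError("agent crashed mid-step")
    except RuntimeError:
        pass  # state was rolled back; another worker can safely retry

run_agent_step(conn, "CLM-1001", "corrupted", fail=True)   # rolled back, status stays 'open'
run_agent_step(conn, "CLM-1001", "needs_review")           # committed
status = conn.execute(
    "SELECT status FROM agent_case_state WHERE case_id = 'CLM-1001'"
).fetchone()[0]
print(status)  # needs_review
```

Because a crashed step leaves no partial write, downstream agents only ever see case states that a step fully committed, which is what makes replay and handoff safe.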


By Cyprian Aarons, AI Consultant at Topiax.