How to Integrate LangGraph with LangSmith for Insurance AI Agents

By Cyprian Aarons · Updated 2026-04-22

Insurance workflows need traceability. If you’re building claim triage, policy Q&A, or underwriting assistants, combining LangGraph for insurance with LangSmith gives you both orchestration and observability: the graph controls stateful decisioning, while LangSmith records what the agent did, where it branched, and why it failed.

Prerequisites

  • Python 3.10+
  • A LangChain/LangGraph-compatible environment
  • Installed packages:
    • langgraph
    • langchain
    • langchain-openai
    • langsmith
  • API keys configured:
    • OPENAI_API_KEY
    • LANGCHAIN_API_KEY or LANGSMITH_API_KEY
  • LangSmith project created in the dashboard
  • A basic insurance workflow defined:
    • claim intake
    • policy lookup
    • coverage decision
    • human review fallback
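
Before wiring up LangGraph, it can help to see those four stages as plain functions. The sketch below is illustrative only; the stage logic and field names (`intake_complete`, `policy_active`, `assigned_to`, and so on) are assumptions for this article, not part of any library API.

```python
# Minimal sketch of the four-stage workflow the graph will formalize.
def claim_intake(claim: dict) -> dict:
    claim["intake_complete"] = bool(claim.get("claim_text"))
    return claim

def policy_lookup(claim: dict) -> dict:
    claim["policy_active"] = True  # stub: replace with a policy-system call
    return claim

def coverage_decision(claim: dict) -> dict:
    claim["decision"] = "approve" if claim["policy_active"] else "review"
    return claim

def human_review_fallback(claim: dict) -> dict:
    if claim["decision"] != "approve":
        claim["assigned_to"] = "claims-adjuster-queue"
    return claim

claim = {"claim_text": "Hail damage to roof."}
for stage in (claim_intake, policy_lookup, coverage_decision, human_review_fallback):
    claim = stage(claim)
```

Each stage reads and enriches the same dictionary, which is exactly the shape LangGraph's shared state formalizes in the next section.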

Integration Steps

  1. Install the packages and configure tracing.

pip install langgraph langchain langchain-openai langsmith

import os

os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "insurance-agent"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-key"

LangSmith picks up traces automatically when tracing is enabled. For production systems, keep these values in your deployment secrets manager, not in source control.
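
A small startup check can catch missing configuration before the first request, rather than after a silent, untraced run. This helper is a sketch; the variable list mirrors the setup above, and you may also want to accept `LANGSMITH_API_KEY` as an alternative key name.

```python
import os

REQUIRED_VARS = ("OPENAI_API_KEY", "LANGCHAIN_API_KEY")

def missing_tracing_config() -> list:
    """Return the names of tracing-related settings that are unset or disabled."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if os.environ.get("LANGCHAIN_TRACING_V2", "").lower() != "true":
        missing.append("LANGCHAIN_TRACING_V2")
    return missing

# Fail fast at startup instead of discovering missing traces later.
problems = missing_tracing_config()
if problems:
    print("Tracing misconfigured, missing:", ", ".join(problems))
```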

  2. Build a LangGraph workflow for an insurance agent.

This example uses a simple state graph to route between claim review and policy lookup. In real insurance systems, you’d add document extraction, fraud checks, and human escalation.

from typing import TypedDict, Literal
from langgraph.graph import StateGraph, END

class InsuranceState(TypedDict):
    claim_text: str
    intent: str
    result: str

def classify_intent(state: InsuranceState) -> InsuranceState:
    text = state["claim_text"].lower()
    if "policy" in text or "coverage" in text:
        state["intent"] = "policy_lookup"
    else:
        state["intent"] = "claim_review"
    return state

def policy_lookup(state: InsuranceState) -> InsuranceState:
    state["result"] = "Policy is active. Coverage includes collision and comprehensive."
    return state

def claim_review(state: InsuranceState) -> InsuranceState:
    state["result"] = "Claim requires manual review due to missing loss details."
    return state

def route(state: InsuranceState) -> Literal["policy_lookup", "claim_review"]:
    return state["intent"]

graph = StateGraph(InsuranceState)
graph.add_node("classify_intent", classify_intent)
graph.add_node("policy_lookup", policy_lookup)
graph.add_node("claim_review", claim_review)

graph.set_entry_point("classify_intent")
graph.add_conditional_edges(
    "classify_intent",
    route,
    {
        "policy_lookup": "policy_lookup",
        "claim_review": "claim_review",
    },
)
graph.add_edge("policy_lookup", END)
graph.add_edge("claim_review", END)

app = graph.compile()

  3. Add LangSmith tracing around graph execution.

LangGraph execution will be traced when LangSmith is configured correctly. If you want explicit control, wrap the run in a LangSmith trace context so each invocation is easy to find in the dashboard.

from langsmith import traceable

@traceable(name="insurance-agent-run")
def run_insurance_agent(claim_text: str):
    initial_state = {
        "claim_text": claim_text,
        "intent": "",
        "result": "",
    }
    return app.invoke(initial_state)

response = run_insurance_agent("Does my auto policy cover windshield damage?")
print(response)

This creates a named trace in LangSmith for each execution. In a real claims pipeline, that trace becomes your audit trail for debugging routing mistakes and model behavior.
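
If auditors need records outside the LangSmith dashboard, the same inputs and outputs can also be mirrored into an append-only local log. The record schema below is an assumption for illustration, not a LangSmith export format.

```python
import datetime
import json

def audit_record(run_name, inputs, outputs):
    """Serialize one agent run as a JSON line for an append-only audit log."""
    return json.dumps({
        "run": run_name,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,
        "outputs": outputs,
    }, sort_keys=True)

line = audit_record(
    "insurance-agent-run",
    {"claim_text": "Windshield damage"},
    {"intent": "policy_lookup"},
)
```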

  4. Attach an LLM node and keep the trace visible in LangSmith.

Most insurance agents need model-backed reasoning somewhere in the graph. Use a chat model node for extraction or summarization, then let LangSmith capture the prompt, response, latency, and errors.

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def extract_claim_summary(state: InsuranceState) -> InsuranceState:
    messages = [
        SystemMessage(content="Summarize this insurance claim in one sentence."),
        HumanMessage(content=state["claim_text"]),
    ]
    summary = llm.invoke(messages)
    state["result"] = summary.content
    return state

# Example of replacing a node with LLM-backed processing
graph2 = StateGraph(InsuranceState)
graph2.add_node("extract_claim_summary", extract_claim_summary)
graph2.set_entry_point("extract_claim_summary")
graph2.add_edge("extract_claim_summary", END)

app2 = graph2.compile()
print(app2.invoke({"claim_text": "Rear-end collision at intersection.", "intent": "", "result": ""}))

If tracing is enabled, every llm.invoke(...) call shows up inside the same run tree as your graph execution.
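
Model calls also fail transiently, and each failed attempt is worth keeping visible in the run tree rather than hiding behind a silent retry. A minimal retry wrapper might look like the sketch below; the helper name and backoff policy are assumptions, and in production you would catch the model client's specific exception types rather than bare `Exception`.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.0):
    """Call fn(), retrying with exponential backoff; re-raise after the last attempt."""
    last_err = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:  # narrow this to the model client's error types
            last_err = err
            time.sleep(base_delay * (2 ** attempt))
    raise last_err

# Usage sketch inside a graph node:
# summary = with_retries(lambda: llm.invoke(messages))
```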

  5. Push structured metadata into traces for insurance operations.

Metadata matters in regulated workflows. Add claim IDs, line of business, and escalation flags so your team can filter runs by business context inside LangSmith.

from langsmith import Client
import uuid

client = Client()

# create_run requires a run_type; generate the run ID client-side so it can be
# logged and correlated later (create_run itself does not return the run).
run_id = uuid.uuid4()
client.create_run(
    id=run_id,
    name="insurance-claim-trace",
    run_type="chain",
    inputs={"claim_text": "Water damage reported in kitchen."},
    project_name="insurance-agent",
    extra={"metadata": {"claim_id": "CLM-1042", "line_of_business": "property"}},
)

print(run_id)

For most teams, explicit low-level run creation is less common than automatic tracing. Still, it’s useful when you want to attach custom metadata from upstream systems like claims platforms or CRM tools.
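
For that common case, metadata from an upstream claims platform can be flattened into a plain dict and attached to the automatic trace (for example via `@traceable(metadata=...)` on the entry-point function). The upstream field names below (`id`, `lob`, `escalation_flag`) are assumptions standing in for your claims system's schema.

```python
def build_trace_metadata(claim_record):
    """Map upstream claim fields to flat trace metadata (field names are illustrative)."""
    return {
        "claim_id": claim_record.get("id", "unknown"),
        "line_of_business": claim_record.get("lob", "unknown"),
        "escalated": bool(claim_record.get("escalation_flag", False)),
    }

metadata = build_trace_metadata({"id": "CLM-1042", "lob": "property"})
```

Keeping this mapping in one function means every run carries the same filterable keys, which is what makes dashboard queries by business context reliable.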

Testing the Integration

Run a sample input and confirm that both the graph output and the LangSmith trace appear.

test_input = {
    "claim_text": "Can you check if my policy covers hail damage?",
    "intent": "",
    "result": "",
}

output = app.invoke(test_input)
print(output)

Expected output:

{
  'claim_text': 'Can you check if my policy covers hail damage?',
  'intent': 'policy_lookup',
  'result': 'Policy is active. Coverage includes collision and comprehensive.'
}

What to verify in LangSmith:

  • A new run appears under your insurance-agent project
  • The run shows the graph steps in order
  • Any LLM calls appear as child spans
  • Inputs and outputs are visible for debugging
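
The routing rule itself can also be smoke-tested offline before checking the dashboard. This standalone check mirrors the keyword logic of `classify_intent` from the graph above; the second test sentence is an assumed example input.

```python
def classify(text):
    """Mirror of the graph's intent rule: policy/coverage keywords vs. everything else."""
    t = text.lower()
    return "policy_lookup" if ("policy" in t or "coverage" in t) else "claim_review"

cases = {
    "Can you check if my policy covers hail damage?": "policy_lookup",
    "I was rear-ended and want to file a claim.": "claim_review",
}
results = {text: classify(text) for text in cases}
```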

Real-World Use Cases

  • Claim triage agents that classify incoming FNOL messages and route them to fast-track or manual review.
  • Policy servicing assistants that answer coverage questions while preserving full execution traces for compliance.
  • Underwriting copilots that summarize submissions, flag missing documents, and record every decision path for audit review.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
