How to Integrate LangGraph with LangSmith for Insurance Startups

By Cyprian Aarons · Updated 2026-04-22
Tags: langgraph-for-insurance, langsmith, startups

LangGraph gives you the orchestration layer for insurance workflows: claims intake, policy checks, underwriting triage, and escalation paths. LangSmith gives you the observability layer: traces, prompts, tool calls, failures, and latency. Put them together and you get an agent system that can route insurance cases deterministically while still being debuggable enough for a startup team shipping fast.

Prerequisites

  • Python 3.10+
  • An active LangSmith account
  • A LangSmith API key
  • A LangChain/LangGraph-compatible environment
  • Insurance workflow definitions ready to model as graph nodes
  • Access to your model provider key, such as OpenAI or Anthropic

Install the packages:

pip install langgraph langsmith langchain-openai

Set environment variables:

export LANGSMITH_API_KEY="lsv2_..."
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="insurance-agent-startup"
export OPENAI_API_KEY="sk-..."

Integration Steps

  1. Define the insurance workflow as a LangGraph state machine.

For insurance, keep the graph explicit. Claims intake should not be an unstructured prompt chain; it should be a stateful workflow with clear transitions.

from typing import TypedDict, Literal
from langgraph.graph import StateGraph, END

class InsuranceState(TypedDict):
    claim_text: str
    risk_level: Literal["low", "medium", "high"]
    decision: str

def intake_claim(state: InsuranceState) -> InsuranceState:
    text = state["claim_text"].lower()
    if "injury" in text or "hospital" in text:
        state["risk_level"] = "high"
    elif "damage" in text:
        state["risk_level"] = "medium"
    else:
        state["risk_level"] = "low"
    return state

def route_claim(state: InsuranceState) -> str:
    return "manual_review" if state["risk_level"] == "high" else "auto_decide"

def manual_review(state: InsuranceState) -> InsuranceState:
    state["decision"] = "escalate_to_adjuster"
    return state

def auto_decide(state: InsuranceState) -> InsuranceState:
    state["decision"] = "approve_fast_track"
    return state

graph = StateGraph(InsuranceState)
graph.add_node("intake_claim", intake_claim)
graph.add_node("manual_review", manual_review)
graph.add_node("auto_decide", auto_decide)

graph.set_entry_point("intake_claim")
graph.add_conditional_edges("intake_claim", route_claim, {
    "manual_review": "manual_review",
    "auto_decide": "auto_decide",
})
graph.add_edge("manual_review", END)
graph.add_edge("auto_decide", END)

app = graph.compile()

  2. Add LangSmith tracing so every node execution is visible.

LangSmith works best when tracing is enabled at the environment level and your app runs through its standard integrations. For custom code paths, use the SDK client to annotate runs and capture metadata.

import os
import uuid

from langsmith import Client

client = Client(api_key=os.environ["LANGSMITH_API_KEY"])

# create_run does not return the run object, so supply your own id
# up front and reuse it for later updates.
run_id = uuid.uuid4()
client.create_run(
    id=run_id,
    project_name=os.environ["LANGSMITH_PROJECT"],
    name="insurance-claims-workflow",
    run_type="chain",
    inputs={"claim_text": "Customer reported hospital visit after accident"},
)

client.update_run(
    run_id,
    outputs={"status": "started"},
)
print(f"Created LangSmith run: {run_id}")

  3. Wrap model calls inside graph nodes with traceable execution.

If your insurance workflow uses an LLM for classification or summarization, call it from inside a node so LangSmith can capture the full path.

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def summarize_claim(state: InsuranceState) -> InsuranceState:
    messages = [
        SystemMessage(content="Summarize this insurance claim for an adjuster."),
        HumanMessage(content=state["claim_text"]),
    ]
    response = llm.invoke(messages)
    state["decision"] = response.content
    return state

If you want richer traces in LangSmith, keep node boundaries small and meaningful. That makes it obvious whether failures come from intake logic, model behavior, or routing.

  4. Execute the graph with LangSmith tracing enabled.

This is where the two tools meet: LangGraph controls the flow, while LangSmith records what happened at each step.

result = app.invoke(
    {
        "claim_text": "Customer reported hospital visit after car accident",
        "risk_level": "low",
        "decision": "",
    }
)

print(result)

If LANGSMITH_TRACING=true is set correctly, your graph run appears in the LangSmith project dashboard without extra wiring in most standard setups.
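
A small startup guard can catch the common misconfiguration where tracing silently stays off. This is a sketch; the variable names match the exports from the setup section:

```python
import os

def tracing_enabled() -> bool:
    """True when the LangSmith tracing env vars look correctly set."""
    return (
        os.environ.get("LANGSMITH_TRACING", "").lower() == "true"
        and bool(os.environ.get("LANGSMITH_API_KEY"))
    )

if not tracing_enabled():
    print("Warning: LangSmith tracing is off; graph runs will not be recorded.")
```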

  5. Tag runs with startup-friendly metadata.

Startups need to slice traces by customer segment, claim type, or environment. Use metadata so you can compare behavior across test traffic and production traffic.

from langsmith import Client

client = Client()

client.create_run(
    project_name="insurance-agent-startup",
    name="claims-triage-prod",
    run_type="chain",
    inputs={"claim_text": "Water damage in apartment"},
    tags=["prod", "claims", "triage"],
    extra={"metadata": {"tenant_id": "acme-insurance", "env": "prod"}},
)

Testing the Integration

Run a simple end-to-end invocation and confirm both workflow output and trace visibility.

test_state = {
    "claim_text": "Customer reported minor roof damage after storm",
    "risk_level": "low",
    "decision": "",
}

output = app.invoke(test_state)
print(output)

Expected output:

{
  'claim_text': 'Customer reported minor roof damage after storm',
  'risk_level': 'medium',
  'decision': 'approve_fast_track'
}

Note that risk_level flips from the initial "low" to "medium" because the claim text contains "damage", and medium-risk claims take the auto_decide path rather than manual review.

In LangSmith, you should see:

  • One top-level run for the graph invocation
  • Node-level spans for intake_claim and auto_decide
  • Input/output payloads attached to each step
  • Latency data for each node

Real-World Use Cases

  • Claims triage agent that classifies severity, routes high-risk cases to human adjusters, and logs every decision path in LangSmith.
  • Underwriting assistant that gathers applicant data through a graph, calls policy rules at each step, and lets your team inspect failures by tenant or product line.
  • Policy Q&A agent that answers customer questions with retrieval plus guardrails, while LangSmith tracks prompt changes and regression issues across releases.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
