How to Integrate LangGraph with LangSmith for Multi-Agent Insurance Systems
Combining LangGraph with LangSmith gives you a practical control plane for multi-agent insurance workflows. You get deterministic orchestration for claims, underwriting, and policy servicing, plus trace-level visibility into what each agent did, why it did it, and where the workflow failed.
Prerequisites
- Python 3.10+
- `langgraph`
- `langchain`
- `langsmith`
- An LLM provider key set in your environment
- A LangSmith account and project created
- Access to your insurance domain tools:
  - policy lookup API
  - claims API
  - document extraction/OCR service
  - fraud or risk scoring service
Install the core packages:
pip install langgraph langchain langsmith langchain-openai
Set your environment variables:
export OPENAI_API_KEY="your-openai-key"
export LANGSMITH_API_KEY="your-langsmith-key"
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="insurance-multi-agent"
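If a tracing variable is missing, runs execute but traces are silently dropped, so it helps to fail fast at startup. A minimal sketch (the variable names match the exports above; the helper itself is illustrative):

```python
import os

REQUIRED_VARS = [
    "OPENAI_API_KEY",
    "LANGSMITH_API_KEY",
    "LANGSMITH_TRACING",
    "LANGSMITH_PROJECT",
]

def missing_tracing_config(env: dict) -> list[str]:
    # Return the names of required variables that are unset or empty,
    # so a startup check can abort instead of silently losing traces.
    return [name for name in REQUIRED_VARS if not env.get(name)]

problems = missing_tracing_config(os.environ)
if problems:
    print(f"Missing environment variables: {problems}")
```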
Integration Steps
1. Define your agent roles and workflow boundaries
In insurance systems, don’t let one giant agent do everything. Split responsibilities into nodes: intake, policy validation, claims assessment, escalation, and final decision.
from typing import TypedDict

from langgraph.graph import StateGraph, END

class InsuranceState(TypedDict):
    claim_text: str
    policy_number: str
    extracted_facts: dict
    decision: str
    notes: list[str]
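A node never returns the whole state, only the keys it changed; LangGraph merges that partial update into the shared state. A plain-Python sketch of that default per-key merge (the `apply_update` helper is illustrative, not a LangGraph API):

```python
def apply_update(state: dict, update: dict) -> dict:
    # Illustrative only: mimics LangGraph's default behavior of merging
    # a node's returned keys into shared state (last write wins per key).
    return {**state, **update}

state = {"claim_text": "burst pipe", "extracted_facts": {}, "decision": "", "notes": []}
state = apply_update(state, {"notes": state["notes"] + ["Intake completed"]})
```

This is why each node below appends to `state["notes"]` rather than replacing it: returning the full list preserves earlier entries under a last-write-wins merge.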
2. Build a LangGraph workflow for the insurance process

Use `StateGraph` to orchestrate the agents. Each node is a focused function that updates state. This is the part that gives you repeatability and control.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def intake_node(state: InsuranceState):
    prompt = f"Extract key facts from this claim:\n{state['claim_text']}"
    result = llm.invoke(prompt)
    return {
        "extracted_facts": {"summary": result.content},
        "notes": state.get("notes", []) + ["Intake completed"]
    }

def policy_check_node(state: InsuranceState):
    prompt = f"Check if policy {state['policy_number']} covers this claim:\n{state['extracted_facts']}"
    result = llm.invoke(prompt)
    return {
        "decision": result.content,
        "notes": state.get("notes", []) + ["Policy check completed"]
    }

workflow = StateGraph(InsuranceState)
workflow.add_node("intake", intake_node)
workflow.add_node("policy_check", policy_check_node)
workflow.set_entry_point("intake")
workflow.add_edge("intake", "policy_check")
workflow.add_edge("policy_check", END)
app = workflow.compile()
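The graph above is linear, but insurance flows usually need branches, such as routing ambiguous or denied decisions to a human. LangGraph supports this with `add_conditional_edges`, which takes a router function that inspects state and returns the name of the next node. A minimal router sketch (the `escalate`/`finalize` node names and the keyword check are illustrative, not a recommended decision rule):

```python
def route_after_policy_check(state: dict) -> str:
    # Illustrative router: send denied or unclear decisions to a human
    # escalation node, everything else to a final-decision node.
    decision = state.get("decision", "").lower()
    if "deny" in decision or "unclear" in decision:
        return "escalate"
    return "finalize"

# Wiring (assuming "escalate" and "finalize" nodes have been added):
# workflow.add_conditional_edges(
#     "policy_check",
#     route_after_policy_check,
#     {"escalate": "escalate", "finalize": "finalize"},
# )
```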
3. Enable LangSmith tracing for every node execution

LangGraph runs the workflow; LangSmith captures traces. For most setups, the environment variables above are enough if you use supported LangChain/LangGraph integrations. If you want explicit tracing around custom logic, use `traceable`.
from langsmith import traceable

@traceable(name="claims_intake")
def traced_intake(state: InsuranceState):
    return intake_node(state)

@traceable(name="policy_validation")
def traced_policy_check(state: InsuranceState):
    return policy_check_node(state)
Then wire the traced functions into the graph:
workflow = StateGraph(InsuranceState)
workflow.add_node("intake", traced_intake)
workflow.add_node("policy_check", traced_policy_check)
workflow.set_entry_point("intake")
workflow.add_edge("intake", "policy_check")
workflow.add_edge("policy_check", END)
app = workflow.compile()
4. Run the multi-agent flow and inspect traces in LangSmith
Execute the graph with real insurance data. Each node call becomes visible in LangSmith as a step in the run tree.
input_state = {
    "claim_text": "Customer reports water damage from burst pipe on 2024-11-03.",
    "policy_number": "POL-883144",
    "extracted_facts": {},
    "decision": "",
    "notes": []
}

result = app.invoke(input_state)
print(result)
In LangSmith, you should see:
- one parent run for the graph execution
- child runs for `claims_intake` and `policy_validation`
- inputs/outputs for each node
- latency and token usage per call
5. Add tool calls for production insurance systems
Real workflows need external systems. Use tools inside nodes for policy lookup or claims verification, then keep those calls traced as part of the same run.
import os

import requests

def fetch_policy(policy_number: str) -> dict:
    response = requests.get(
        f"https://api.your-insurer.com/policies/{policy_number}",
        timeout=10,
        headers={"Authorization": f"Bearer {os.environ['INSURANCE_API_TOKEN']}"}
    )
    response.raise_for_status()
    return response.json()

@traceable(name="policy_lookup")
def policy_lookup_node(state: InsuranceState):
    policy_data = fetch_policy(state["policy_number"])
    return {
        "extracted_facts": {**state.get("extracted_facts", {}), "policy": policy_data},
        "notes": state.get("notes", []) + ["Policy lookup completed"]
    }
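Carrier APIs fail intermittently, so production nodes should retry transient errors rather than fail the whole run. A stdlib-only retry sketch (the backoff parameters are arbitrary defaults, not tuned recommendations):

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5,
                 retriable=(ConnectionError, TimeoutError)):
    # Call fn(); on a retriable error, back off exponentially and try again.
    # The final failure is re-raised so the node (and its trace) records it.
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage inside a node (fetch_policy as defined above):
# policy_data = with_retries(lambda: fetch_policy(state["policy_number"]))
```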
Testing the Integration
Use a minimal end-to-end run and confirm both orchestration and tracing work.
test_state = {
    "claim_text": "Fire damage reported in kitchen after electrical fault.",
    "policy_number": "POL-100200",
    "extracted_facts": {},
    "decision": "",
    "notes": []
}

output = app.invoke(test_state)
print(output["decision"])
print(output["notes"])
Example output (the `decision` text is model-generated and will vary; the `notes` list should match exactly):

Coverage likely applies based on reported peril.
['Intake completed', 'Policy check completed']
If tracing is configured correctly, open your LangSmith project and verify:
- the run appears under `insurance-multi-agent`
- node-level spans are present
- inputs and outputs are captured without manual logging code
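Beyond the end-to-end run, node logic can be unit-tested offline by injecting a stub with the same `.invoke(...)` interface a LangChain chat model exposes. A sketch (the `StubLLM` class is a hypothetical test helper, not a LangChain class; the node takes the model as a parameter so the stub can be injected):

```python
class StubResponse:
    def __init__(self, content: str):
        self.content = content

class StubLLM:
    # Hypothetical test double mimicking the .invoke() -> .content shape
    # of a LangChain chat model, so node logic runs without network calls.
    def __init__(self, reply: str):
        self.reply = reply

    def invoke(self, prompt: str):
        return StubResponse(self.reply)

def intake_node(state: dict, llm) -> dict:
    # Same logic as the intake node above, with the model passed in
    # explicitly so tests can inject the stub.
    result = llm.invoke(f"Extract key facts from this claim:\n{state['claim_text']}")
    return {
        "extracted_facts": {"summary": result.content},
        "notes": state.get("notes", []) + ["Intake completed"],
    }

update = intake_node({"claim_text": "burst pipe", "notes": []},
                     StubLLM("pipe burst; water damage"))
```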
Real-World Use Cases
- Claims triage: one agent extracts facts from FNOL text, another checks policy coverage, another flags suspicious claims for SIU review.
- Underwriting assistance: route applicant data through document parsing, risk scoring, and exception handling agents with full traceability.
- Customer servicing: orchestrate policy changes, endorsements, billing questions, and escalation paths while keeping every decision auditable.
The main value here is not just “agents working together.” It’s controlled execution plus observability. In insurance, that combination is what makes multi-agent systems supportable in production.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit