# How to Integrate LangGraph with LangSmith for Production Insurance AI
Combining LangGraph with LangSmith gives you a production-grade pattern for regulated insurance agent workflows: deterministic orchestration from LangGraph, plus tracing, evaluation, and debugging from LangSmith. That matters when you're handling claims intake, policy servicing, or underwriting assistants, where every step needs auditability and repeatable behavior.
## Prerequisites
- Python 3.10+
- A LangGraph-based insurance workflow, already defined or ready to build
- A LangSmith account and API key
- Installed packages:
  - `langgraph`
  - `langchain-core`
  - `langsmith`
  - `python-dotenv`
- Environment variables configured:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=insurance-agent-prod`
- Access to your model provider, such as OpenAI or Anthropic
## Integration Steps
### 1. Install the SDKs and set up tracing

LangSmith tracing is mostly configuration-driven. Once enabled, your LangGraph runs will show up as traces without changing the orchestration logic.

```shell
pip install langgraph langchain-core langsmith python-dotenv
```

```python
import os
from dotenv import load_dotenv

load_dotenv()
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "insurance-agent-prod"
os.environ["LANGSMITH_API_KEY"] = os.getenv("LANGSMITH_API_KEY", "")
```
### 2. Build a LangGraph workflow for an insurance use case

Use LangGraph to model the flow explicitly. For insurance, a common pattern is: classify request → extract entities → route to the right handler → produce a response.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class InsuranceState(TypedDict):
    message: str
    intent: str
    result: str


def classify_intent(state: InsuranceState):
    text = state["message"].lower()
    if "claim" in text:
        return {"intent": "claims"}
    if "policy" in text:
        return {"intent": "policy"}
    return {"intent": "general"}


def handle_claims(state: InsuranceState):
    return {"result": f"Claims workflow started for: {state['message']}"}


def handle_policy(state: InsuranceState):
    return {"result": f"Policy servicing workflow started for: {state['message']}"}


def handle_general(state: InsuranceState):
    return {"result": f"General support response for: {state['message']}"}


graph = StateGraph(InsuranceState)
graph.add_node("classify_intent", classify_intent)
graph.add_node("handle_claims", handle_claims)
graph.add_node("handle_policy", handle_policy)
graph.add_node("handle_general", handle_general)

graph.add_edge(START, "classify_intent")


def route(state: InsuranceState):
    return state["intent"]


graph.add_conditional_edges(
    "classify_intent",
    route,
    {
        "claims": "handle_claims",
        "policy": "handle_policy",
        "general": "handle_general",
    },
)

graph.add_edge("handle_claims", END)
graph.add_edge("handle_policy", END)
graph.add_edge("handle_general", END)

app = graph.compile()
```
### 3. Attach LangSmith tracing to the run

When you invoke the compiled graph, LangSmith captures node-level execution automatically if tracing is enabled. If you want explicit metadata for insurance workloads, pass tags and metadata into the run config.

```python
from langchain_core.runnables import RunnableConfig

config = RunnableConfig(
    tags=["insurance", "production", "claims-routing"],
    metadata={
        "tenant": "acme-insurance",
        "channel": "web",
        "workflow_version": "v1",
    },
)

result = app.invoke(
    {"message": "I need to file a claim for water damage"},
    config=config,
)
print(result)
```
### 4. Add a traced LLM node for production debugging

In real systems, one node usually calls an LLM for extraction or summarization. Wrap that call inside the graph so LangSmith records prompt inputs, outputs, latency, and failures. Note that this step also requires the `langchain-openai` package, which is not part of the base install list above.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract structured insurance details from the user's message."),
    ("human", "{message}"),
])


def extract_details(state: InsuranceState):
    chain = prompt | llm
    response = chain.invoke({"message": state["message"]})
    return {"result": response.content}


graph2 = StateGraph(InsuranceState)
graph2.add_node("extract_details", extract_details)
graph2.add_edge(START, "extract_details")
graph2.add_edge("extract_details", END)
traced_app = graph2.compile()

output = traced_app.invoke(
    {"message": "Policy number 12345 needs beneficiary update"},
    config={"tags": ["insurance", "extraction"]},
)
print(output)
```
### 5. Use LangSmith datasets and evaluations for regression testing

Production AI needs repeatable checks. Store example insurance requests in a dataset and run evaluations against your workflow before shipping changes.
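The core idea can be sketched offline before wiring it into LangSmith: pair example requests with expected intents, then score the classifier against them. The in-memory `EXAMPLES` list and `run_regression` helper below are hypothetical stand-ins; in a real setup you would store the examples in a LangSmith dataset (via `Client.create_dataset`) and run them with the LangSmith `evaluate` API so results are tracked per version.

```python
# Minimal regression-check sketch for the intent classifier from step 2.
# Offline stand-in for a LangSmith dataset + evaluation run.

def classify_intent(message: str) -> str:
    # Same keyword routing as the graph node in step 2.
    text = message.lower()
    if "claim" in text:
        return "claims"
    if "policy" in text:
        return "policy"
    return "general"

# Hypothetical regression dataset: (request, expected intent) pairs.
EXAMPLES = [
    ("I need to file a claim for water damage", "claims"),
    ("Please update the beneficiary on my policy", "policy"),
    ("What are your office hours?", "general"),
]

def run_regression(examples):
    # Return (pass count, list of failures) for the example set.
    failures = [
        (msg, expected, classify_intent(msg))
        for msg, expected in examples
        if classify_intent(msg) != expected
    ]
    return len(examples) - len(failures), failures

passed, failures = run_regression(EXAMPLES)
print(f"{passed}/{len(EXAMPLES)} examples passed")
for msg, expected, got in failures:
    print(f"FAIL: {msg!r} expected={expected} got={got}")
```

Running this before shipping a change to the routing logic catches regressions cheaply; moving the same examples into a LangSmith dataset adds history and side-by-side comparison across workflow versions.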
## Testing the Integration

Use a simple invocation first. If tracing is wired correctly, you should see the run in your LangSmith project after execution.

```python
from langchain_core.runnables import RunnableConfig

test_input = {"message": "I want to check my policy coverage"}
config = RunnableConfig(
    tags=["smoke-test", "insurance"],
    metadata={"test_case": "policy_coverage_check"},
)
output = app.invoke(test_input, config=config)
print(output)
```

Expected output:

```python
{'message': 'I want to check my policy coverage', 'intent': 'policy', 'result': 'Policy servicing workflow started for: I want to check my policy coverage'}
```
In LangSmith, confirm:
- The project name is `insurance-agent-prod`
- The trace includes your tags
- Each LangGraph node appears as a step in the run tree
- Inputs and outputs are visible for debugging
## Real-World Use Cases

- **Claims intake assistant.** Route incoming claims by line of business, extract incident details with an LLM node, and trace every decision path in LangSmith.
- **Policy servicing copilot.** Handle updates like address changes, beneficiary edits, and coverage questions with deterministic branching plus full run observability.
- **Underwriting triage agent.** Classify submissions, summarize risk factors, and send edge cases to human review while keeping an audit trail of prompts and outputs.
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.