# How to Integrate LangGraph with LangSmith for Insurance RAG
Combining LangGraph with LangSmith gives you two things you actually need in production: controlled agent execution and observability. In an insurance RAG system, that means you can route claims, policy, and underwriting queries through a graph while tracing retrieval quality, prompt behavior, and tool calls end to end.
## Prerequisites

- Python 3.10+
- A LangGraph-compatible agent project already set up
- A LangSmith account and API key
- Access to your LLM provider, for example OpenAI or Anthropic
- A vector store or retriever for insurance documents
- Environment variables configured:
  - `LANGSMITH_API_KEY`
  - `LANGSMITH_TRACING=true`
  - `LANGSMITH_PROJECT=insurance-rag`
  - `OPENAI_API_KEY` or equivalent
## Integration Steps

### 1. Install the required packages

You need LangGraph for orchestration, LangChain integrations for RAG components, and LangSmith for tracing. The FAISS vector store used below also needs the `faiss-cpu` package.

```bash
pip install langgraph langchain langchain-openai langchain-community langsmith faiss-cpu
```
### 2. Configure LangSmith tracing before building the graph

LangSmith traces are easiest to use when enabled at process startup. This lets every node execution, retriever call, and LLM invocation show up in the same run tree.

```python
import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "insurance-rag"
os.environ["LANGSMITH_API_KEY"] = "lsv2_your_api_key_here"
os.environ["OPENAI_API_KEY"] = "sk-your-openai-key"
```
### 3. Build the RAG components with traceable LangChain objects

For insurance use cases, keep retrieval explicit. That makes it easier to inspect whether the model answered from policy docs, claims guidelines, or underwriting rules.

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document

docs = [
    Document(page_content="Policy A covers water damage if caused by burst pipes."),
    Document(page_content="Claims over $10,000 require supervisor approval."),
    Document(page_content="Underwriting excludes pre-existing structural damage."),
]

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = FAISS.from_documents(docs, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
```
### 4. Create a LangGraph workflow that calls the retriever and model

In LangGraph, define state and nodes explicitly. That gives you deterministic control over the RAG flow: retrieve first, then generate an answer grounded in retrieved context.

```python
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class GraphState(TypedDict):
    question: str
    context: List[str]
    answer: str


def retrieve(state: GraphState):
    docs = retriever.invoke(state["question"])
    return {"context": [d.page_content for d in docs]}


def generate(state: GraphState):
    context_text = "\n".join(state["context"])
    prompt = (
        "You are an insurance assistant.\n"
        "Answer only using the provided context.\n\n"
        f"Context:\n{context_text}\n\n"
        f"Question: {state['question']}"
    )
    response = llm.invoke(prompt)
    return {"answer": response.content}


builder = StateGraph(GraphState)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.set_entry_point("retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)
app = builder.compile()
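A detail worth internalizing here: each node returns only the keys it updates, and LangGraph merges that partial output into the shared state (its default behavior when no custom reducer is defined). A dependency-free sketch of that merge pattern, with stub nodes standing in for the real retriever and model:

```python
# Illustrative only: mimics LangGraph's default state-merge behavior
# with stub nodes, so the flow can be followed without any dependencies.

def retrieve_stub(state: dict) -> dict:
    # A real node would call the retriever; here we return canned context.
    return {"context": ["Claims over $10,000 require supervisor approval."]}

def generate_stub(state: dict) -> dict:
    # A real node would call the LLM; here we echo the first context chunk.
    return {"answer": state["context"][0]}

def run_linear_graph(state: dict, nodes: list) -> dict:
    for node in nodes:
        # Each node's partial output overwrites only the keys it returns;
        # untouched keys (like "question") survive unchanged.
        state = {**state, **node(state)}
    return state

final = run_linear_graph(
    {"question": "What approval is required for claims over $10,000?"},
    [retrieve_stub, generate_stub],
)
print(final["answer"])
```

This is why `retrieve` above can return just `{"context": ...}` without losing the question: the graph, not the node, owns the full state.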
### 5. Run the graph with LangSmith tracing enabled

Once tracing is on, every call through `app.invoke()` is captured by LangSmith automatically if your environment is configured correctly. You can also add metadata to make filtering easier in the UI.

```python
result = app.invoke(
    {"question": "Does Policy A cover water damage from a burst pipe?"},
    config={
        "run_name": "insurance-rag-query",
        "tags": ["insurance", "rag", "policy"],
        "metadata": {"customer_segment": "personal_lines"},
    },
)
print(result["answer"])
```
## Testing the Integration

Use a query that should be answered directly from your sample documents. Then confirm two things: the output is grounded in retrieved context, and the run appears in LangSmith under your project.

```python
test_result = app.invoke(
    {"question": "What approval is required for claims over $10,000?"},
    config={
        "run_name": "integration-test",
        "tags": ["test", "langgraph", "langsmith"],
    },
)
print("ANSWER:", test_result["answer"])
```
Expected output:

```text
ANSWER: Claims over $10,000 require supervisor approval.
```
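To automate the "grounded in retrieved context" half of that check, one option is a rough token-overlap heuristic. This is a quick sanity check for a test suite, not a substitute for proper evaluation; the function and its tokenization rule are illustrative:

```python
import re

def grounding_score(answer: str, context_chunks: list) -> float:
    """Fraction of answer tokens that also appear in the retrieved context."""
    def tokenize(text: str) -> set:
        return set(re.findall(r"[a-z0-9$,]+", text.lower()))

    answer_tokens = tokenize(answer)
    context_tokens = tokenize(" ".join(context_chunks))
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

score = grounding_score(
    "Claims over $10,000 require supervisor approval.",
    ["Claims over $10,000 require supervisor approval."],
)
print(round(score, 2))
```

In a CI test you could assert that `grounding_score(test_result["answer"], test_result["context"])` stays above a threshold you choose, and flag runs that drift below it.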
In LangSmith, you should see:

- One top-level run named `integration-test`
- Child runs for retrieval and generation
- The prompt input and model output
- Tags and metadata attached to the run
## Real-World Use Cases

- **Claims triage assistant.** Route FNOL (first notice of loss) questions through a graph that retrieves policy clauses, checks claim thresholds, and escalates to a human adjuster when needed.
- **Policy Q&A copilot.** Answer customer service questions using indexed policy documents while tracing which source chunks were used for each answer.
- **Underwriting support agent.** Pull underwriting rules from internal documents, summarize eligibility constraints, and log every decision path in LangSmith for audit review.
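The claims-triage pattern above hinges on a routing decision, which LangGraph expresses as a conditional edge: a plain function inspects the state and returns the name of the next node. A dependency-free sketch of such a router (the $10,000 threshold mirrors the sample claims guideline; the node and field names are illustrative):

```python
ESCALATION_THRESHOLD = 10_000  # mirrors the sample claims guideline

def route_claim(state: dict) -> str:
    """Return the next node name, as a LangGraph conditional edge would."""
    amount = state.get("claim_amount", 0)
    if amount > ESCALATION_THRESHOLD:
        return "human_adjuster"   # supervisor approval required
    if not state.get("policy_clauses"):
        return "retrieve_policy"  # no context yet: retrieve before answering
    return "auto_answer"

print(route_claim({"claim_amount": 15_000}))  # high-value claim escalates
```

Wired in with something like `builder.add_conditional_edges("triage", route_claim, ...)`, this function would pick the branch at runtime, and each decision would show up as part of the traced run in LangSmith.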
The main pattern here is simple: use LangGraph to control the insurance workflow, then use LangSmith to inspect what happened at each step. That combination gives you a system you can debug when retrieval fails, defend when auditors ask questions, and improve when answers drift off policy.
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap? Grab the free AI Agent Starter Kit: architecture templates, compliance checklists, and a 7-email deep-dive course.