How to Integrate Next.js with the Vercel AI SDK for Multi-Agent Insurance Systems
Combining Next.js with the Vercel AI SDK gives you a clean way to build agentic insurance workflows that can move from UI to decisioning without glue-code sprawl. The pattern is useful when you need multiple agents handling intake, underwriting, claims triage, and policy lookup while still keeping the user experience in a Next.js app.
The real value is orchestration. Next.js handles the customer-facing surface and server routes, while Vercel AI SDK gives you structured tool calling, streaming responses, and multi-step agent coordination.
Prerequisites
- Node.js 18+ installed
- A Next.js app already created
- Vercel AI SDK installed: `npm install ai @ai-sdk/openai`
- A model provider configured, such as OpenAI: `OPENAI_API_KEY` set in `.env.local`
- Your insurance backend exposed as an HTTP API:
  - policy lookup
  - claims status
  - underwriting rules
- Python 3.10+ if you want to run local integration checks or orchestration scripts alongside the app
- Familiarity with `app/api/.../route.ts`, `streamText`, `generateText`, and `tool`
Integration Steps
1. Define the insurance services your agents will call

Start by wrapping your insurance system endpoints in a thin Python client. This is useful when your multi-agent stack includes an external orchestrator or validation layer that needs to inspect requests before they reach Next.js.

```python
import os

import requests

INSURANCE_API_BASE = os.getenv("INSURANCE_API_BASE", "http://localhost:8000")


class InsuranceClient:
    def get_policy(self, policy_id: str):
        return requests.get(f"{INSURANCE_API_BASE}/policies/{policy_id}", timeout=10).json()

    def get_claim(self, claim_id: str):
        return requests.get(f"{INSURANCE_API_BASE}/claims/{claim_id}", timeout=10).json()

    def submit_claim_note(self, claim_id: str, note: str):
        payload = {"note": note}
        return requests.post(
            f"{INSURANCE_API_BASE}/claims/{claim_id}/notes",
            json=payload,
            timeout=10,
        ).json()
```
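Insurance backends are often slow or flaky under load. If that applies to yours, a small retry wrapper around the client calls keeps transient failures out of your agent loop. This is only a sketch using the standard library; the `with_retries` name, attempt count, and backoff values are illustrative, and in practice you would catch `requests.RequestException` rather than bare `Exception`.

```python
import time


def with_retries(fn, attempts=3, backoff=0.5):
    """Call fn(); on exception, retry with exponential backoff.

    Raises the last error if every attempt fails.
    """
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:  # narrow this to requests.RequestException in real code
            last_err = err
            time.sleep(backoff * (2 ** i))
    raise last_err
```

Usage would look like `with_retries(lambda: client.get_policy("POL-12345"))`.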
2. Expose a Next.js API route that uses Vercel AI SDK tools

In the app layer, create a route that lets the model call insurance tools. This is where `streamText` and `tool` do the heavy lifting. In `app/api/insurance-agent/route.ts`:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages,
    tools: {
      getPolicy: tool({
        description: 'Fetch policy details by policy ID',
        parameters: z.object({
          policyId: z.string(),
        }),
        execute: async ({ policyId }) => {
          const res = await fetch(`${process.env.INSURANCE_API_BASE}/policies/${policyId}`);
          return await res.json();
        },
      }),
      getClaim: tool({
        description: 'Fetch claim details by claim ID',
        parameters: z.object({
          claimId: z.string(),
        }),
        execute: async ({ claimId }) => {
          const res = await fetch(`${process.env.INSURANCE_API_BASE}/claims/${claimId}`);
          return await res.json();
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```
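Note that `toDataStreamResponse()` returns the AI SDK's data stream, not plain text, so a Python consumer needs to parse it. A minimal parser sketch, assuming the AI SDK data stream protocol in which each line is `<type>:<json>` and type `0` carries a text delta; the wire format varies by SDK version, so check the stream protocol docs for the version you install:

```python
import json


def extract_text_parts(raw_stream: str) -> str:
    """Collect the text deltas from an AI SDK data-stream body.

    Lines that are not '<type>:<json>' pairs, or whose type prefix
    is not '0' (e.g. tool-call or finish parts), are skipped.
    """
    chunks = []
    for line in raw_stream.splitlines():
        prefix, sep, payload = line.partition(":")
        if sep and prefix == "0":
            chunks.append(json.loads(payload))
    return "".join(chunks)
```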
3. Add a Python multi-agent orchestrator for routing tasks

For multi-agent systems, keep routing logic outside the UI. A simple Python coordinator can decide whether an underwriting agent, claims agent, or policy agent should handle the request before sending it into your Next.js-backed endpoint.

```python
from dataclasses import dataclass


@dataclass
class AgentTask:
    kind: str
    payload: dict


class Orchestrator:
    def route(self, text: str) -> AgentTask:
        lowered = text.lower()
        if "claim" in lowered or "accident" in lowered:
            return AgentTask(kind="claims", payload={"query": text})
        if "policy" in lowered or "coverage" in lowered:
            return AgentTask(kind="policy", payload={"query": text})
        return AgentTask(kind="triage", payload={"query": text})

    def build_nextjs_request(self, task: AgentTask):
        return {
            "messages": [
                {"role": "system", "content": f"You are the {task.kind} insurance agent."},
                {"role": "user", "content": task.payload["query"]},
            ]
        }
```
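Keyword routing gets more useful when the orchestrator also pulls explicit identifiers out of the message, so downstream agents receive structured IDs rather than raw text alone. A sketch of that pre-pass; the `POL-`/`CLM-` formats are hypothetical, so substitute your carrier's real ID schemes:

```python
import re

# Hypothetical ID formats; replace with your carrier's real schemes.
POLICY_ID = re.compile(r"\bPOL-\d+\b")
CLAIM_ID = re.compile(r"\bCLM-\d+\b")


def extract_ids(text: str) -> dict:
    """Pull explicit policy/claim IDs out of a user message so the
    orchestrator can attach them to the task payload."""
    return {
        "policy_ids": POLICY_ID.findall(text),
        "claim_ids": CLAIM_ID.findall(text),
    }
```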
4. Call the Next.js AI route from Python and forward results between agents

This is the glue step. Your orchestrator can call the Next.js route, receive streamed or JSON-backed output, then pass that response into another agent for validation or summarization.

```python
import requests


class NextJsAgentClient:
    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def chat(self, messages):
        response = requests.post(
            f"{self.base_url}/api/insurance-agent",
            json={"messages": messages},
            timeout=30,
        )
        response.raise_for_status()
        return response.text


orchestrator = Orchestrator()
task = orchestrator.route("Check my coverage for a recent water damage claim")
payload = orchestrator.build_nextjs_request(task)

client = NextJsAgentClient("http://localhost:3000")
result_text = client.chat(payload["messages"])
print(result_text)
```
5. Add structured handoff between agents using JSON

In production multi-agent systems, don't pass raw prose between agents if you can avoid it. Have one agent emit structured JSON so downstream agents can validate policy numbers, claim IDs, and recommended actions.

```python
import json


def parse_agent_output(raw_text: str) -> dict:
    try:
        return json.loads(raw_text)
    except json.JSONDecodeError:
        return {
            "status": "unstructured",
            "message": raw_text,
        }


def validate_handoff(data: dict) -> bool:
    required_keys = {"status", "summary"}
    return required_keys.issubset(data.keys())
```
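The producing side of the handoff mirrors this: have each agent serialize its result into the same envelope before passing it on. A sketch, where the `build_handoff` helper and the extra fields are illustrative rather than part of any SDK:

```python
import json


def build_handoff(status: str, summary: str, **extra) -> str:
    """Serialize an agent result into the JSON envelope downstream
    validation expects: 'status' and 'summary' are required keys,
    and anything else (claim IDs, recommended actions) rides along."""
    return json.dumps({"status": status, "summary": summary, **extra})
```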
Testing the Integration
Use a simple smoke test that hits your Next.js route through Python and verifies that tool calling works end to end.
```python
import requests

payload = {
    "messages": [
        {"role": "user", "content": "Look up policy POL-12345 and summarize coverage."}
    ]
}

resp = requests.post("http://localhost:3000/api/insurance-agent", json=payload, timeout=30)
print("status:", resp.status_code)
print("body:", resp.text[:500])
```
Expected output:
```text
status: 200
body: ...streamed AI response containing either a direct summary or a tool call result...
```
If the integration is working correctly, you should see:
- HTTP 200 from the Next.js route
- The model invoking `getPolicy`
- A response that includes coverage details pulled from your insurance API
Real-World Use Cases
- Claims intake assistant
  - A customer describes an incident.
  - One agent extracts entities like date, location, and loss type.
  - Another agent calls policy and claims APIs through your Next.js route to determine next steps.
- Underwriting pre-check
  - An intake agent evaluates applicant data.
  - A rules agent checks eligibility.
  - A summarizer agent returns a clean recommendation for human underwriters.
- Policy servicing copilot
  - Users ask about endorsements, renewals, and coverage limits.
  - The UI stays in Next.js.
  - The Vercel AI SDK handles tool use against internal insurance services.
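To make the claims-intake case concrete, the entity-extraction step can be prototyped deterministically before handing it to a model. This regex pass is only a sketch: a real intake agent would use the LLM (or an NER model) and a far richer loss-type taxonomy than the illustrative list below.

```python
import re

LOSS_TYPES = ["water damage", "fire", "theft", "collision"]  # illustrative taxonomy


def extract_incident_entities(text: str) -> dict:
    """Rough first-pass extraction of incident date and loss type
    from a free-text incident description."""
    date_match = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)
    lowered = text.lower()
    loss_type = next((lt for lt in LOSS_TYPES if lt in lowered), None)
    return {
        "date": date_match.group(1) if date_match else None,
        "loss_type": loss_type,
    }
```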
The pattern to keep is simple: let Next.js own delivery and session context, let Vercel AI SDK own model/tool orchestration, and let Python coordinate multi-agent routing when you need stronger control over workflow state. That separation keeps insurance logic testable and avoids turning your UI layer into an agent runtime.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit