How to Integrate Next.js for pension funds with Vercel AI SDK for multi-agent systems
Combining Next.js for pension funds with Vercel AI SDK gives you a clean path to build agent-driven pension workflows that are responsive, auditable, and easy to extend. The practical win is simple: your Next.js layer handles member-facing and admin-facing app flows, while the AI SDK coordinates multi-agent reasoning for tasks like document triage, contribution checks, and retirement guidance.
Prerequisites
- Python 3.10+
- A running Next.js application for pension funds with API routes enabled
- Vercel AI SDK installed in your app stack
- Access to your pension fund backend APIs
- Environment variables configured:
  - `PENSION_API_URL`
  - `PENSION_API_KEY`
  - `OPENAI_API_KEY` or another model provider key used by the Vercel AI SDK
- A basic multi-agent design:
  - intake agent
  - policy/compliance agent
  - resolution agent
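Since a missing key usually surfaces only as a confusing runtime failure deep in a request, it can help to validate the environment at startup. A minimal sketch, assuming the variable names listed above; `require_env` is a hypothetical helper, not part of any SDK:

```python
import os

# The variables this integration expects, per the prerequisites list.
REQUIRED_VARS = ["PENSION_API_URL", "PENSION_API_KEY", "OPENAI_API_KEY"]

def require_env(names: list[str]) -> dict:
    """Return the named environment variables, raising early if any is unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}
```

Call `require_env(REQUIRED_VARS)` once when the orchestration service boots so misconfiguration fails fast instead of mid-request.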
Integration Steps
Step 1: Expose pension fund actions from Next.js

Your Next.js app should expose server endpoints that the agents can call. For pension workflows, keep the surface area narrow: member lookup, contribution history, benefit estimate, and case creation.

```python
import requests
from dataclasses import dataclass

@dataclass
class PensionClient:
    """Thin client for the pension endpoints exposed by the Next.js app."""
    base_url: str
    api_key: str

    def get_member(self, member_id: str) -> dict:
        # Member lookup: GET /api/members/:id on the Next.js side.
        resp = requests.get(
            f"{self.base_url}/api/members/{member_id}",
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=20,
        )
        resp.raise_for_status()
        return resp.json()

    def create_case(self, payload: dict) -> dict:
        # Case creation: POST /api/cases on the Next.js side.
        resp = requests.post(
            f"{self.base_url}/api/cases",
            json=payload,
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=20,
        )
        resp.raise_for_status()
        return resp.json()
```
Step 2: Wrap Vercel AI SDK agent calls behind a Python service

The Vercel AI SDK is typically used in the JavaScript layer, but your Python orchestration service can call the same model provider endpoints over HTTP. Keep the agent contract explicit: input text in, structured JSON out.

```python
import os
import requests

class AgentGateway:
    """Calls the model provider directly; mirrors the endpoints the Vercel AI SDK uses."""

    def __init__(self):
        self.api_key = os.environ["OPENAI_API_KEY"]
        self.model = "gpt-4o-mini"

    def run_agent(self, system_prompt: str, user_prompt: str) -> dict:
        payload = {
            "model": self.model,
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
            # Low temperature keeps classification and compliance output stable.
            "temperature": 0.1,
        }
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
            json=payload,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()
```
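`run_agent` returns the raw chat completions payload, so every caller has to dig out `choices[0].message.content`. Keeping that in one helper avoids repeating the traversal; `message_text` is a hypothetical name, and the error handling is an assumption about how you want bad payloads surfaced:

```python
def message_text(completion: dict) -> str:
    """Pull the assistant's text out of a chat-completions-style response dict."""
    try:
        return completion["choices"][0]["message"]["content"].strip()
    except (KeyError, IndexError, AttributeError):
        # Surface unexpected payload shapes instead of silently returning "".
        raise ValueError(f"Unexpected completion payload: {completion!r}")
```

The orchestrator and compliance agent below could then call `message_text(result)` instead of indexing into the response in several places.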
Step 3: Build a multi-agent router for pension tasks

Use one router agent to classify the request, then hand off to specialized agents. This keeps compliance logic separate from customer-facing language generation.

```python
import json

class PensionOrchestrator:
    def __init__(self, pension_client: PensionClient, gateway: AgentGateway):
        self.pension_client = pension_client
        self.gateway = gateway

    def route(self, member_id: str, request_text: str) -> dict:
        member = self.pension_client.get_member(member_id)
        # Router agent: classify the request before any generation happens.
        classifier = self.gateway.run_agent(
            system_prompt=(
                "Classify pension support requests into one of: "
                "benefit_estimate, contribution_issue, document_request, complaint."
            ),
            user_prompt=f"Request: {request_text}\nMember: {json.dumps(member)}",
        )
        route_label = classifier["choices"][0]["message"]["content"].strip()
        if route_label == "benefit_estimate":
            return self._estimate(member)
        if route_label == "contribution_issue":
            return self._investigate_contributions(member)
        if route_label == "document_request":
            return self._generate_documents_case(member)
        return self._create_complaint_case(member, request_text)

    def _estimate(self, member: dict) -> dict:
        prompt = f"Generate a concise retirement benefit explanation for this member:\n{json.dumps(member)}"
        result = self.gateway.run_agent(
            system_prompt="You are a pension benefits assistant.",
            user_prompt=prompt,
        )
        return {"type": "estimate", "answer": result["choices"][0]["message"]["content"]}

    def _investigate_contributions(self, member: dict) -> dict:
        case = self.pension_client.create_case({
            "member_id": member["id"],
            "category": "contribution_issue",
            "priority": "high",
        })
        return {"type": "case_created", "case": case}

    def _generate_documents_case(self, member: dict) -> dict:
        case = self.pension_client.create_case({
            "member_id": member["id"],
            "category": "document_request",
            "priority": "normal",
        })
        return {"type": "case_created", "case": case}

    def _create_complaint_case(self, member: dict, request_text: str) -> dict:
        case = self.pension_client.create_case({
            "member_id": member["id"],
            "category": "complaint",
            "priority": "high",
            "details": request_text,
        })
        return {"type": "case_created", "case": case}
```
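Exact string comparisons on `route_label` are fragile: models sometimes add punctuation, quotes, or different casing. One defensive option is to normalize the classifier output before branching; this is a sketch, and `normalize_label` with its fallback-to-complaint behavior is an assumption, not part of the SDK:

```python
import string

# The labels the router's system prompt asks for.
VALID_LABELS = {"benefit_estimate", "contribution_issue", "document_request", "complaint"}

def normalize_label(raw: str, fallback: str = "complaint") -> str:
    """Map a model-produced classification string onto a known route label."""
    cleaned = raw.strip().strip(string.punctuation).lower().replace(" ", "_")
    # Anything unrecognized falls back to the human-review path.
    return cleaned if cleaned in VALID_LABELS else fallback
```

Routing unknown labels to `complaint` errs on the side of creating a case a human will see, which is usually the safer default in a regulated workflow.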
Step 4: Add compliance review as a second agent

In pensions, you do not want the same agent generating advice and approving it. Run a compliance pass before returning anything to the user.

```python
import json

class ComplianceAgent:
    def __init__(self, gateway: AgentGateway):
        self.gateway = gateway

    def review(self, draft_answer: str) -> dict:
        # Separate reviewer: never the same agent that drafted the answer.
        result = self.gateway.run_agent(
            system_prompt=(
                "Review pension communications for regulatory risk. "
                "Return JSON with fields: approved (boolean), issues (array), redraft (string)."
            ),
            user_prompt=draft_answer,
        )
        return result

    def safe_response(self, draft_answer: str) -> str:
        review = self.review(draft_answer)
        content = review["choices"][0]["message"]["content"]
        data = json.loads(content)
        if data["approved"]:
            return draft_answer
        # If the reviewer rejects the draft, return its redraft instead.
        return data["redraft"]
```
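`json.loads` on the raw message content will fail whenever the model wraps its JSON in a markdown fence, which happens often in practice even with explicit instructions. A more tolerant parse might look like this; `extract_json` is a hypothetical helper and the regex is a sketch, not a guarantee against every output shape:

```python
import json
import re

def extract_json(content: str) -> dict:
    """Parse a JSON object out of model output, tolerating ```json fences."""
    match = re.search(r"```(?:json)?\s*(\{.*\})\s*```", content, re.DOTALL)
    candidate = match.group(1) if match else content.strip()
    return json.loads(candidate)
```

`safe_response` could call `extract_json(content)` in place of `json.loads(content)`; if parsing still fails, treating the draft as unapproved is the conservative choice.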
Step 5: Connect everything through a single orchestration entry point

This is the part your Next.js frontend calls. It returns either an answer or a case reference.

```python
import os
from flask import Flask, request, jsonify

app = Flask(__name__)

pension_client = PensionClient(
    base_url="https://your-nextjs-app.com",
    api_key=os.environ["PENSION_API_KEY"],
)
gateway = AgentGateway()
orchestrator = PensionOrchestrator(pension_client, gateway)
compliance_agent = ComplianceAgent(gateway)

@app.post("/agent/pension-support")
def pension_support():
    payload = request.get_json()
    result = orchestrator.route(payload["member_id"], payload["request_text"])
    if result["type"] == "estimate":
        # Generated answers pass through compliance review before leaving the service.
        safe_text = compliance_agent.safe_response(result["answer"])
        return jsonify({"status": "ok", "response": safe_text})
    return jsonify({"status": "ok", **result})

if __name__ == "__main__":
    app.run(port=8000)
```
Testing the Integration
Use a simple smoke test to verify routing, API access, and response shaping.
```python
import requests

resp = requests.post(
    "http://localhost:8000/agent/pension-support",
    json={
        "member_id": "M12345",
        "request_text": "Can you explain my projected retirement benefit?"
    },
    timeout=30,
)
print(resp.status_code)
print(resp.json())
```
Expected output:
```json
{
  "status": "ok",
  "response": "...pension benefit explanation..."
}
```
If the request triggers a case instead of an answer:
```json
{
  "status": "ok",
  "type": "case_created",
  "case": {
    "id": "CASE-90871",
    "category": "contribution_issue",
    "...": "..."
  }
}
```
Real-World Use Cases
- Member support copilot: Answer benefit questions, explain contribution gaps, and open cases when policy or account data needs human review.
- Compliance-first document handling: Route uploaded forms through an intake agent and a compliance agent before anything is sent back to members or operations teams.
- Retirement planning assistant: Combine account data from Next.js endpoints with AI-generated summaries for projected income scenarios and next-step recommendations.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit