How to Integrate Next.js with the Vercel AI SDK for Fintech AI Agents
Why this integration matters
If you are building AI agents for fintech, you need two things: a fast app layer for customer-facing workflows and a reliable agent runtime for reasoning, tool use, and streaming responses. Next.js gives you the web surface and server routes; the Vercel AI SDK gives you the primitives to stream, orchestrate, and expose agent behavior cleanly.
The useful pattern is simple: Next.js handles the product shell and API boundaries, and its route handlers use the Vercel AI SDK to generate responses, call tools, and stream output, while your Python agent service orchestrates longer-running workflows and exchanges structured JSON with your fintech app.
Prerequisites
- Node.js 18+ and npm installed
- Python 3.10+ installed
- A Next.js app already created for your fintech product
- A Vercel project connected to that app
- Access to an LLM provider supported by the Vercel AI SDK
- Basic familiarity with:
  - `NextResponse` in Next.js route handlers
  - `streamText()` from `ai`
  - `generateText()` from `ai`
- Environment variables configured:
  - `OPENAI_API_KEY` or equivalent provider key
  - `VERCEL_PROJECT_PRODUCTION_URL` if you are calling deployed endpoints from Python
Integration Steps
1) Create a Next.js API route that exposes a fintech-safe agent endpoint
Start with a route handler in your Next.js app. This is where your fintech UI will send customer context like account type, intent, or transaction metadata.
```python
# app/api/agent/route.py
#
# Conceptual Python-like pseudocode for the integration flow. In a real
# Next.js app, this route is implemented in TypeScript/JavaScript, but the
# request/response contract below is what your Python agent will call.
from typing import Any, Dict

def post(request_body: Dict[str, Any]) -> Dict[str, Any]:
    user_message = request_body.get("message", "")
    customer_id = request_body.get("customerId", "")
    risk_band = request_body.get("riskBand", "standard")
    return {
        "customerId": customer_id,
        "riskBand": risk_band,
        "message": user_message,
        "status": "accepted",
    }
```
Your production route should validate input before it reaches any model call. For fintech, do not pass raw PII unless you have a clear redaction policy.
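One way to enforce that boundary is a thin validation layer in front of the agent route. The sketch below is illustrative: the `ALLOWED_RISK_BANDS` set, the field names, and the `redact_pii` regex are assumptions for this article's contract, not part of either framework, and a real redaction policy would go well beyond a single pattern.

```python
import re
from typing import Any, Dict

ALLOWED_RISK_BANDS = {"standard", "elevated", "high"}

# Hypothetical redaction helper: masks anything that looks like a card or
# account number before the message can reach a model-facing route.
ACCOUNT_NUMBER = re.compile(r"\b\d{8,19}\b")

def redact_pii(text: str) -> str:
    return ACCOUNT_NUMBER.sub("[REDACTED]", text)

def validate_agent_request(body: Dict[str, Any]) -> Dict[str, Any]:
    message = body.get("message")
    if not isinstance(message, str) or not message.strip():
        raise ValueError("message is required")
    risk_band = body.get("riskBand", "standard")
    if risk_band not in ALLOWED_RISK_BANDS:
        raise ValueError(f"unknown riskBand: {risk_band}")
    return {
        "message": redact_pii(message.strip()),
        "customerId": str(body.get("customerId", "")),
        "riskBand": risk_band,
    }
```

Reject on failure rather than silently fixing up input, so bad payloads surface in your logs instead of in model prompts.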
2) Build the agent service in Python using Vercel AI SDK-compatible contracts
If your orchestration layer is in Python, keep it thin and let it call the Next.js endpoint as an internal tool. The key is to preserve a stable JSON contract between the two systems.
```python
import os
from typing import Any, Dict

import requests

NEXTJS_AGENT_URL = os.environ["NEXTJS_AGENT_URL"]

def send_customer_context(message: str, customer_id: str, risk_band: str) -> Dict[str, Any]:
    payload = {
        "message": message,
        "customerId": customer_id,
        "riskBand": risk_band,
    }
    response = requests.post(
        f"{NEXTJS_AGENT_URL}/api/agent",
        json=payload,
        timeout=15,
    )
    response.raise_for_status()
    return response.json()
```
This pattern works well when Next.js owns session state and auth while Python owns longer-running agent workflows. It also keeps your model-facing code away from browser code.
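If the Python side also needs to authenticate itself to Next.js, a minimal option is a shared-secret HMAC signature plus a correlation ID header. The header names, environment variable, and `service_headers` helper below are hypothetical choices for this sketch, not part of the Vercel AI SDK; the Next.js route would recompute the same HMAC over the raw body to verify the caller.

```python
import hashlib
import hmac
import uuid
from typing import Dict

# Hypothetical scheme: both services hold the same signing key (e.g. from a
# SERVICE_SIGNING_KEY env var) and Python signs each raw request body.
def service_headers(body: bytes, signing_key: bytes) -> Dict[str, str]:
    signature = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
    return {
        "X-Request-Id": str(uuid.uuid4()),  # correlation ID, logged on both sides
        "X-Signature": signature,
        "Content-Type": "application/json",
    }
```

Send the exact bytes you signed (`data=body`, not `json=payload`) so both sides hash identical input.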
3) Add Vercel AI SDK streaming on the Next.js side
In the Next.js route that actually talks to the model, use streamText() so your fintech UI can render partial answers quickly. That matters for support agents, underwriting assistants, and payment investigation flows.
```python
from typing import Any, Dict

# Conceptual representation of a Vercel AI SDK-backed route.
# The real implementation uses TypeScript:
#   import { streamText } from 'ai'
#   import { openai } from '@ai-sdk/openai'

def build_agent_prompt(data: Dict[str, Any]) -> str:
    return (
        "You are a fintech support agent.\n"
        f"Customer ID: {data['customerId']}\n"
        f"Risk band: {data['riskBand']}\n"
        f"User message: {data['message']}\n"
        "Respond with concise next steps and no sensitive data."
    )

def stream_text(prompt: str) -> str:
    # Placeholder for Vercel AI SDK streamText() behavior.
    # In production this streams tokens back to the client.
    return prompt
```
The actual SDK call you want in your Next.js handler is:
```ts
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

const result = streamText({
  model: openai('gpt-4o-mini'),
  prompt,
})
```
That gives you server-side streaming without hand-rolling SSE plumbing.
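On the Python side you can consume that stream incrementally instead of waiting for the full completion. This sketch assumes the Next.js handler returns the model output as a plain streamed text body (for example via the SDK's `toTextStreamResponse()`); the route path mirrors the contract from step 1.

```python
import requests

def stream_agent_reply(base_url: str, payload: dict) -> str:
    """Collect a streamed text response from the Next.js agent route."""
    chunks = []
    with requests.post(
        f"{base_url}/api/agent",
        json=payload,
        stream=True,   # do not buffer the whole body before returning
        timeout=60,
    ) as res:
        res.raise_for_status()
        for chunk in res.iter_content(chunk_size=None, decode_unicode=True):
            if chunk:
                chunks.append(chunk)  # hand each partial chunk to the UI here
    return "".join(chunks)
```

In a real workflow you would forward each chunk as it arrives; joining at the end is only for illustration.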
4) Call the Next.js endpoint from Python as an internal tool inside your agent loop
Now wire the Python orchestration layer so it can invoke the Next.js route when it needs frontend-aligned context. This is useful when your agent decides whether to escalate, summarize, or fetch more account metadata.
```python
import os
from typing import Any, Dict

import requests

NEXTJS_BASE_URL = os.environ["NEXTJS_BASE_URL"]

def get_agent_reply(message: str) -> Dict[str, Any]:
    payload = {
        "message": message,
        "customerId": "cus_10293",
        "riskBand": "high",
    }
    res = requests.post(
        f"{NEXTJS_BASE_URL}/api/agent",
        json=payload,
        timeout=20,
    )
    res.raise_for_status()
    return res.json()

if __name__ == "__main__":
    result = get_agent_reply("Show me why my transfer was flagged.")
    print(result)
```
For production use:
- add retries with exponential backoff
- sign requests between services
- log correlation IDs end-to-end
- redact account numbers before sending anything to model-facing routes
5) Return structured outputs that your fintech UI can trust
Do not let the model free-write everything. Have it emit structured fields like `decision`, `reason_code`, and `next_action`. That makes downstream rendering in Next.js deterministic.
```python
from typing import Literal

from pydantic import BaseModel

class AgentResponse(BaseModel):
    decision: Literal["approve", "review", "escalate"]
    reason_code: str
    next_action: str

def normalize_response(raw: dict) -> AgentResponse:
    return AgentResponse(
        decision=raw.get("decision", "review"),
        reason_code=raw.get("reason_code", "UNKNOWN"),
        next_action=raw.get("next_action", "Ask customer for more context."),
    )
```
This is the difference between an assistant demo and something you can ship in regulated workflows.
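When the structured output arrives as raw model text rather than a pre-parsed dict, parse it defensively too. This sketch repeats the `AgentResponse` model so it stands alone; the `PARSE_ERROR` reason code is an assumption for this contract, not a standard value.

```python
import json
from typing import Literal

from pydantic import BaseModel, ValidationError

class AgentResponse(BaseModel):
    decision: Literal["approve", "review", "escalate"]
    reason_code: str
    next_action: str

def parse_model_output(raw_text: str) -> AgentResponse:
    # Anything that is not valid JSON matching the schema collapses to a
    # safe "review" decision instead of crashing the UI.
    try:
        return AgentResponse(**json.loads(raw_text))
    except (json.JSONDecodeError, TypeError, ValidationError):
        return AgentResponse(
            decision="review",
            reason_code="PARSE_ERROR",
            next_action="Route to human review.",
        )
```

Failing closed to "review" keeps a malformed model reply from ever auto-approving anything.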
Testing the Integration
Use a simple smoke test that calls your Next.js endpoint through Python and checks for expected fields.
```python
import os

import requests

BASE_URL = os.environ["NEXTJS_BASE_URL"]

payload = {
    "message": "Why was my card payment declined?",
    "customerId": "cus_10293",
    "riskBand": "standard",
}

response = requests.post(f"{BASE_URL}/api/agent", json=payload, timeout=15)
response.raise_for_status()
data = response.json()
print(data)

assert data["status"] == "accepted"
assert data["customerId"] == "cus_10293"
```
Expected output:

```
{'customerId': 'cus_10293', 'riskBand': 'standard', 'message': 'Why was my card payment declined?', 'status': 'accepted'}
```
If you have wired streaming correctly on the Next.js side with streamText(), your UI should start rendering partial tokens before the full completion arrives.
Real-World Use Cases
- Customer support copilot that explains declines, chargebacks, and payment holds without exposing sensitive ledger details.
- Underwriting assistant that summarizes applicant context and routes borderline cases to human review.
- Fraud operations workflow where an agent classifies alerts and returns structured escalation actions into a Next.js dashboard.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit