How to Integrate Azure OpenAI with Azure Cosmos DB for Multi-Agent Pension Fund Systems
Why this integration matters
Pension fund workflows are document-heavy, policy-driven, and full of multi-step decisions. Pairing Azure OpenAI with Cosmos DB gives you a clean pattern for building agents that can read member context, retrieve plan data, draft responses, and persist every decision trail in a low-latency store.
For multi-agent systems, this combo is practical: one agent can classify the request, another can fetch pension records from Cosmos DB, and a third can generate a compliant response with Azure OpenAI. You get stateful orchestration without stuffing everything into prompts.
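As a sketch of that routing idea, here is a purely illustrative three-agent pipeline. The keyword classifier and the stubbed retriever and responder are stand-ins for the real Azure OpenAI calls and Cosmos DB reads built in the steps that follow:

```python
# Illustrative only: a toy three-agent pipeline. In the real system the
# classifier and responder call Azure OpenAI, and the retriever queries
# Cosmos DB.

def classify_request(text: str) -> str:
    # Stand-in for a classification agent.
    lowered = text.lower()
    if "transfer" in lowered:
        return "transfers"
    if "contribution" in lowered:
        return "contributions"
    return "general"

def run_pipeline(member_id: str, text: str) -> dict:
    topic = classify_request(text)                         # agent 1: classify
    profile = {"memberId": member_id, "status": "active"}  # agent 2: retrieve (stubbed)
    answer = f"[{topic}] drafted reply for {member_id}"    # agent 3: generate (stubbed)
    return {"topic": topic, "profile": profile, "answer": answer}
```

The point is the shape: each agent has one responsibility, and state flows between them as plain data rather than ever-growing prompts.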
Prerequisites
- An Azure subscription
- An Azure OpenAI resource with:
  - an endpoint
  - an api_key
  - a deployed chat model name, e.g. gpt-4o-mini
- An Azure Cosmos DB account with:
  - the SQL API enabled
  - a database and container created
- Python 3.10+
- These packages installed: openai, azure-cosmos, python-dotenv
- A .env file with:
  - AZURE_OPENAI_ENDPOINT
  - AZURE_OPENAI_API_KEY
  - AZURE_OPENAI_API_VERSION
  - AZURE_OPENAI_DEPLOYMENT_NAME
  - COSMOS_ENDPOINT
  - COSMOS_KEY
  - COSMOS_DATABASE_NAME
  - COSMOS_CONTAINER_NAME
Integration Steps
1) Install dependencies and load configuration
Keep secrets out of code. In production, use Key Vault or managed identity, but .env is fine for local development.
pip install openai azure-cosmos python-dotenv
import os
from dotenv import load_dotenv
load_dotenv()
AZURE_OPENAI_ENDPOINT = os.environ["AZURE_OPENAI_ENDPOINT"]
AZURE_OPENAI_API_KEY = os.environ["AZURE_OPENAI_API_KEY"]
AZURE_OPENAI_API_VERSION = os.environ["AZURE_OPENAI_API_VERSION"]
AZURE_OPENAI_DEPLOYMENT_NAME = os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"]
COSMOS_ENDPOINT = os.environ["COSMOS_ENDPOINT"]
COSMOS_KEY = os.environ["COSMOS_KEY"]
COSMOS_DATABASE_NAME = os.environ["COSMOS_DATABASE_NAME"]
COSMOS_CONTAINER_NAME = os.environ["COSMOS_CONTAINER_NAME"]
2) Create clients for Azure OpenAI and Cosmos DB
Use the official SDKs. For Azure OpenAI, the current Python SDK uses AzureOpenAI. For Cosmos DB SQL API, use CosmosClient.
from openai import AzureOpenAI
from azure.cosmos import CosmosClient
aoai_client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    api_version=AZURE_OPENAI_API_VERSION,
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
)
cosmos_client = CosmosClient(COSMOS_ENDPOINT, credential=COSMOS_KEY)
database = cosmos_client.get_database_client(COSMOS_DATABASE_NAME)
container = database.get_container_client(COSMOS_CONTAINER_NAME)
3) Read pension context from Cosmos DB
In a multi-agent setup, your retrieval agent should pull structured state from Cosmos DB before calling the model. Keep each item small and queryable.
def get_member_profile(member_id: str) -> dict | None:
    query = """
        SELECT * FROM c
        WHERE c.memberId = @memberId
    """
    items = list(
        container.query_items(
            query=query,
            parameters=[{"name": "@memberId", "value": member_id}],
            enable_cross_partition_query=True,
        )
    )
    return items[0] if items else None
member_profile = get_member_profile("MEM-10291")
print(member_profile)
A typical document might include:
- member status
- contribution history
- retirement date estimate
- policy flags
- prior agent actions
That gives your LLM enough context to draft answers without guessing.
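For concreteness, here is a hypothetical member document along those lines. All field names are illustrative, not a required schema; adapt them to your container.

```python
# A sketch of what get_member_profile might return for one member.
example_member_doc = {
    "id": "member-MEM-10291",
    "memberId": "MEM-10291",          # matches the query filter above
    "status": "active",
    "contributionHistory": [
        {"year": 2023, "amount": 4200.00},
        {"year": 2024, "amount": 4600.00},
    ],
    "estimatedRetirementDate": "2041-06-30",
    "policyFlags": ["transfer-review-required"],
    "priorAgentActions": [
        {"agent": "classifier", "action": "routed-to-transfers"},
    ],
}
```

Keeping the document this small means an agent can pass it to the model verbatim without blowing up the prompt.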
4) Generate an agent response with Azure OpenAI
Now pass the retrieved pension context into the model. Use system instructions to constrain behavior around compliance and tone.
def draft_pension_response(member_profile: dict, user_question: str) -> str:
    messages = [
        {
            "role": "system",
            "content": (
                "You are a pension operations assistant. "
                "Answer only from provided member data and policy context. "
                "If data is missing, say what is missing and what to do next."
            ),
        },
        {
            "role": "user",
            "content": f"""
Member profile:
{member_profile}
Question:
{user_question}
""",
        },
    ]
    response = aoai_client.chat.completions.create(
        model=AZURE_OPENAI_DEPLOYMENT_NAME,
        messages=messages,
        temperature=0.2,
    )
    return response.choices[0].message.content
answer = draft_pension_response(
    member_profile,
    "Can this member request a partial transfer without triggering an exception?",
)
print(answer)
For pensions work, keep temperature low. You want deterministic outputs that are easier to audit.
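One way to push further toward repeatable outputs is to pin the sampling parameters in one place. A sketch of audit-friendly settings; note that the seed parameter is best-effort reproducibility only, and support varies by model and API version, so verify it against your deployment:

```python
# Settings aimed at low-variance, audit-friendly generations.
audit_generation_kwargs = {
    "temperature": 0.0,  # minimize sampling randomness
    "top_p": 1.0,        # leave nucleus sampling effectively off
    "seed": 1234,        # best-effort reproducibility; not guaranteed
}

# Usage:
# aoai_client.chat.completions.create(
#     model=AZURE_OPENAI_DEPLOYMENT_NAME,
#     messages=messages,
#     **audit_generation_kwargs,
# )
```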
5) Persist the agent trace back into Cosmos DB
Multi-agent systems need memory. Store the input context, model output, timestamps, and correlation IDs so you can replay decisions later.
from datetime import datetime, timezone
def save_agent_trace(member_id: str, question: str, answer: str) -> None:
    trace_doc = {
        "id": f"trace-{member_id}-{datetime.now(timezone.utc).timestamp()}",
        "type": "agent-trace",
        "memberId": member_id,
        "question": question,
        "answer": answer,
        "createdAt": datetime.now(timezone.utc).isoformat(),
        "source": "azure-openai-cosmos-multi-agent",
    }
    container.upsert_item(trace_doc)
question_text = "Can this member request a partial transfer without triggering an exception?"
save_agent_trace("MEM-10291", question_text, answer)
This pattern gives you:
- conversation memory for downstream agents
- auditability for compliance teams
- replayable traces for debugging bad outputs
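To replay those traces later, a retrieval helper can mirror the member query from step 3. Here is a sketch that only builds the parameterized query; feed the result to container.query_items exactly as in get_member_profile. The ORDER BY assumes the container's indexing policy covers createdAt (the default policy indexes all paths):

```python
def build_trace_query(member_id: str) -> tuple[str, list[dict]]:
    # Parameterized so member IDs are never interpolated into the SQL string.
    query = (
        "SELECT * FROM c "
        "WHERE c.type = 'agent-trace' AND c.memberId = @memberId "
        "ORDER BY c.createdAt"
    )
    return query, [{"name": "@memberId", "value": member_id}]
```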
Testing the Integration
Run a simple end-to-end check: fetch one member record, generate an answer, then store it.
def test_pipeline(member_id: str):
    profile = get_member_profile(member_id)
    if not profile:
        print(f"No profile found for {member_id}")
        return
    question = "Summarize the next action for this pension case."
    response_text = draft_pension_response(profile, question)
    save_agent_trace(member_id, question, response_text)
    print("PROFILE OK:", profile["memberId"])
    print("RESPONSE OK:", response_text[:200])
    print("TRACE SAVED")

test_pipeline("MEM-10291")
Expected output:
PROFILE OK: MEM-10291
RESPONSE OK: The next action is to verify...
TRACE SAVED
If this fails:
- confirm your deployment name matches the Azure OpenAI deployment exactly
- confirm your Cosmos DB partition key aligns with how documents are stored
- confirm the container has at least one document for that member ID
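For the partition-key check, a quick local sanity test can confirm that documents actually carry the configured key before you start debugging queries. A sketch assuming a /memberId partition key (adjust pk_path to your container; nested paths would need extra handling):

```python
def has_partition_key(doc: dict, pk_path: str = "/memberId") -> bool:
    # True if the document defines a non-empty value at the (top-level)
    # partition key path.
    field = pk_path.lstrip("/")
    return bool(doc.get(field))
```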
Real-World Use Cases
- Member service copilot
  - One agent retrieves pension records from Cosmos DB.
  - Another drafts responses about contributions, eligibility, or transfers using Azure OpenAI.
  - A final agent logs the interaction for audit.
- Claims and exceptions triage
  - Route incoming cases to specialized agents.
  - Store case state in Cosmos DB.
  - Use Azure OpenAI to summarize exceptions and recommend next steps based on policy data.
- Compliance-aware case summarization
  - Pull transaction history and policy metadata from Cosmos DB.
  - Generate concise summaries for human reviewers.
  - Persist every summary version as an immutable trace.
This architecture works because each tool does one job well. Cosmos DB holds structured state at scale; Azure OpenAI handles reasoning over that state; your orchestrator keeps agents honest and traceable.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit