How to Integrate OpenAI with Pinecone for Payment-Aware Production AI
Combining OpenAI with Pinecone gives you a clean production pattern for agent systems that need both reasoning and retrieval. In practice, this is what powers payment-aware support bots, invoice assistants, dispute triage, and policy lookup agents that can answer from your own indexed data instead of guessing.
The useful part is not “chat + vector search” in isolation. It’s building an agent that can inspect payment-related context, retrieve the right customer or transaction history from Pinecone, and use OpenAI to generate a controlled response or next action.
Prerequisites
- Python 3.10+
- An OpenAI API key with access to the models you plan to use
- A Pinecone account and API key
- A Pinecone index created with the correct embedding dimension
- `pip install openai pinecone`
- A `.env` file or secret manager for:
  - `OPENAI_API_KEY`
  - `PINECONE_API_KEY`
  - `PINECONE_INDEX_NAME`
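Before wiring the clients, it helps to fail fast when configuration is missing. A minimal sketch of a startup check (the `require_env` helper is illustrative, not part of either SDK; the `setdefault` lines stand in for values your secret manager would provide):

```python
import os

def require_env(*names: str) -> dict:
    """Return the requested environment variables, raising early if any are missing."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}

# Placeholder values for demonstration only; in production these come
# from your .env file or secret manager.
os.environ.setdefault("OPENAI_API_KEY", "sk-test")
os.environ.setdefault("PINECONE_API_KEY", "pc-test")
os.environ.setdefault("PINECONE_INDEX_NAME", "payments")

config = require_env("OPENAI_API_KEY", "PINECONE_API_KEY", "PINECONE_INDEX_NAME")
print(sorted(config))
```

Failing at startup beats failing on the first customer request.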
Integration Steps
1. Install the SDKs and initialize clients

Start by wiring both clients in one place. Keep this in a small `clients.py` module so your app code never touches raw environment variables directly.
```python
import os

from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
pinecone = Pinecone(api_key=os.environ["PINECONE_API_KEY"])

index_name = os.environ["PINECONE_INDEX_NAME"]
index = pinecone.Index(index_name)
```
2. Create embeddings with OpenAI for payment-related text
For production AI, store structured payment events as searchable text. That includes invoice notes, dispute summaries, refund reasons, KYC flags, and support transcripts.
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

payment_texts = [
    "Customer disputed card charge on invoice INV-1042 for subscription renewal.",
    "Refund approved after duplicate ACH transfer on account ACCT-9921.",
    "Payment failed due to expired card; retry scheduled in 24 hours.",
]

embeddings_response = client.embeddings.create(
    model="text-embedding-3-small",
    input=payment_texts,
)

vectors = embeddings_response.data
print(len(vectors), len(vectors[0].embedding))  # 3 1536
```
This gives you dense vectors you can store in Pinecone alongside metadata like customer ID, invoice ID, status, and timestamps.
3. Upsert vectors into Pinecone
Use stable IDs and metadata that your agent can filter on later. For payments workflows, metadata matters as much as similarity because compliance and routing usually depend on account state.
```python
import os

from pinecone import Pinecone

pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
idx = pc.Index(os.environ["PINECONE_INDEX_NAME"])

upserts = []
for i, item in enumerate(payment_texts):
    upserts.append({
        "id": f"payment-{i}",  # stable IDs so re-upserts overwrite instead of duplicating
        "values": embeddings_response.data[i].embedding,
        "metadata": {
            "source": "payments",
            "text": item,
            "customer_id": f"CUST-{1000 + i}",
            "category": "dispute" if i == 0 else "refund" if i == 1 else "failure",
        },
    })

idx.upsert(vectors=upserts)
```
Because Pinecone filters on metadata at query time, you can later query only disputes, or only records for a specific customer segment.
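Compliance routing usually means combining several metadata fields into one filter. A small helper for composing Pinecone filter expressions (the `build_payment_filter` function is illustrative, not part of the SDK; the `$eq` operator syntax is Pinecone's):

```python
def build_payment_filter(source="payments", category=None, customer_id=None):
    """Compose a Pinecone metadata filter dict from optional payment fields."""
    clauses = {"source": {"$eq": source}}
    if category is not None:
        clauses["category"] = {"$eq": category}
    if customer_id is not None:
        clauses["customer_id"] = {"$eq": customer_id}
    return clauses

# Only dispute records for a single customer:
f = build_payment_filter(category="dispute", customer_id="CUST-1000")
print(f)
```

The resulting dict is passed straight to `idx.query(..., filter=f)`, so your routing logic stays testable without touching the index.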
4. Query Pinecone from an agent request
When a user asks about a payment issue, embed the question with OpenAI, then query Pinecone for the closest matches. This is the retrieval step that grounds your agent in your own data.
```python
query = "Why was my renewal charge disputed last week?"

# Embed the question with the same model used at upsert time.
query_embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input=[query],
).data[0].embedding

results = idx.query(
    vector=query_embedding,
    top_k=3,
    include_metadata=True,
    filter={"source": {"$eq": "payments"}},
)

for match in results["matches"]:
    print(match["id"], match["score"], match["metadata"]["text"])
```
In production, don’t send raw vector search results directly to users. Pass them into your LLM prompt as retrieved context.
5. Generate the final response with OpenAI using retrieved context
Now combine the retrieved payment records with a constrained generation step. This is where OpenAI turns search hits into a useful answer or action recommendation.
```python
context_lines = [
    f"- {m['metadata']['text']} (customer_id={m['metadata']['customer_id']}, category={m['metadata']['category']})"
    for m in results["matches"]
]

prompt = f"""
You are a payments support assistant.
Answer using only the retrieved context below.
If the context is insufficient, say what is missing.

User question: {query}

Retrieved context:
{chr(10).join(context_lines)}
"""

response = client.responses.create(
    model="gpt-4o-mini",
    input=prompt,
)
print(response.output_text)
```
Testing the Integration
Run one end-to-end check: embed a known payment issue, retrieve it from Pinecone, then generate a response from OpenAI.
```python
test_query = "What happened with the duplicate transfer refund?"

q_emb = client.embeddings.create(
    model="text-embedding-3-small",
    input=[test_query],
).data[0].embedding

hits = idx.query(vector=q_emb, top_k=1, include_metadata=True)
assert len(hits["matches"]) > 0

ctx = hits["matches"][0]["metadata"]["text"]
result = client.responses.create(
    model="gpt-4o-mini",
    input=f"Answer based on this context only: {ctx}\n\nQuestion: {test_query}",
)
print(result.output_text)
```
Expected output: a short answer grounded in the refund record, along the lines of "The refund was approved after a duplicate ACH transfer was detected on account ACCT-9921." The exact wording will vary between runs, but the response should reference that record and nothing else.
If you get an empty result set or irrelevant matches:

- Check embedding model consistency between upsert and query
- Confirm the Pinecone index dimension matches the embedding size
- Verify metadata filters are not excluding valid records
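The dimension mismatch is the easiest of these to catch programmatically: compare your index's dimension (available from `pc.describe_index(name).dimension` in the Pinecone SDK) against the embedding model's published output size. A small sketch using OpenAI's documented default dimensions:

```python
# Published default output dimensions for OpenAI embedding models.
EMBEDDING_DIMS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "text-embedding-ada-002": 1536,
}

def check_dimension(model: str, index_dimension: int) -> bool:
    """Return True if the index dimension matches the model's output size."""
    expected = EMBEDDING_DIMS.get(model)
    if expected is None:
        raise ValueError(f"Unknown embedding model: {model}")
    return expected == index_dimension

# e.g. an index created with dimension 1536:
print(check_dimension("text-embedding-3-small", 1536))  # True
print(check_dimension("text-embedding-3-large", 1536))  # False
```

Run this once at startup and you turn a silent "no matches" failure into a loud configuration error.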
Real-World Use Cases
- Payment support agents that retrieve transaction history and explain failed charges without exposing unrelated customer data.
- Dispute triage systems that classify chargebacks by similarity to prior cases stored in Pinecone.
- Invoice assistants that answer billing questions from indexed contract terms, past invoices, and payment events.
The production pattern here is simple: OpenAI handles language and embeddings, Pinecone handles retrieval at scale. Keep those responsibilities separate, enforce metadata filters aggressively, and your agent system stays predictable under real payment workloads.
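The seam between those two responsibilities is the context-formatting step, which is worth keeping as a pure, testable function. A sketch (the `format_context` helper is illustrative; it assumes matches shaped like the dicts returned by `idx.query(..., include_metadata=True)` above):

```python
def format_context(matches):
    """Turn Pinecone query matches into bullet lines for the LLM prompt."""
    lines = []
    for m in matches:
        meta = m["metadata"]
        lines.append(
            f"- {meta['text']} (customer_id={meta['customer_id']}, category={meta['category']})"
        )
    return "\n".join(lines)

# Exercised with a hand-built match, no index required:
sample = [{
    "id": "payment-1",
    "score": 0.91,
    "metadata": {
        "text": "Refund approved after duplicate ACH transfer on account ACCT-9921.",
        "customer_id": "CUST-1001",
        "category": "refund",
    },
}]
print(format_context(sample))
```

Because the function never touches either API, you can unit-test your prompt construction without keys, quotas, or network access.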
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.