How to Integrate OpenAI with AWS Lambda for Retail Banking AI Agents
Combining OpenAI with AWS Lambda gives you a clean pattern for building retail banking AI agents that can answer customer questions, classify requests, and trigger backend actions without running a full server. Lambda handles the event-driven execution, while OpenAI handles language understanding and response generation.
This is a strong fit for retail banking workflows like balance inquiries, card dispute triage, loan pre-qualification, and customer support routing. You keep the agent stateless, auditable, and cheap to run.
Prerequisites
- An AWS account with:
  - IAM permissions to create Lambda functions
  - CloudWatch Logs access
- AWS CLI configured locally:

  ```shell
  aws configure
  ```

- Python 3.10+
- An OpenAI API key stored as an environment variable:

  ```shell
  export OPENAI_API_KEY=...
  ```

- `boto3` installed for AWS Lambda calls:

  ```shell
  pip install boto3
  ```

- The `openai` Python SDK installed:

  ```shell
  pip install openai
  ```

- A deployed Lambda function or local Lambda handler file
- Basic familiarity with:
  - AWS Lambda event payloads
  - JSON request/response formats
Integration Steps
- Create a Lambda handler that receives the banking request

Start with a simple Lambda function that accepts an event from API Gateway or another agent orchestrator. The handler extracts the user message and passes it into your OpenAI workflow.

```python
import json


def lambda_handler(event, context):
    # API Gateway proxy events deliver the body as a JSON string
    body = event.get("body", "{}")
    payload = json.loads(body) if isinstance(body, str) else body

    user_message = payload.get("message", "")
    customer_id = payload.get("customer_id", "")

    return {
        "statusCode": 200,
        "body": json.dumps({
            "customer_id": customer_id,
            "message_received": user_message
        })
    }
```
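Before wiring up API Gateway, you can exercise this handler locally with a hand-built proxy event. A minimal sketch (the handler is repeated inline so the snippet runs standalone; the event values are illustrative):

```python
import json


def lambda_handler(event, context):
    # Same parsing logic as the handler above
    body = event.get("body", "{}")
    payload = json.loads(body) if isinstance(body, str) else body
    return {
        "statusCode": 200,
        "body": json.dumps({
            "customer_id": payload.get("customer_id", ""),
            "message_received": payload.get("message", ""),
        }),
    }


# Simulated API Gateway proxy event
event = {"body": json.dumps({"customer_id": "cust_1001", "message": "What is my balance?"})}
result = lambda_handler(event, None)
print(result["statusCode"])                            # 200
print(json.loads(result["body"])["message_received"])  # What is my balance?
```

The same event shape works with `sam local invoke` or the Lambda console's test tab, so you can verify parsing before adding any OpenAI calls.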
- Call OpenAI from inside the Lambda function

Use the OpenAI Python SDK inside your Lambda runtime to generate an intent classification or customer response. For retail banking agents, this is where you identify whether the request is about balances, disputes, payments, or product info.

```python
import json
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def lambda_handler(event, context):
    body = json.loads(event.get("body", "{}"))
    user_message = body.get("message", "")

    response = client.responses.create(
        model="gpt-4.1-mini",
        input=f"""
You are a retail banking assistant.
Classify this message into one of:
balance_inquiry, card_dispute, payment_issue, loan_question, general_support.
Message: {user_message}
Return only the label.
""",
    )
    intent = response.output_text.strip()

    return {
        "statusCode": 200,
        "body": json.dumps({"intent": intent})
    }
```
- Trigger downstream banking actions with AWS SDK calls

Once you have the intent, use boto3 to call other AWS services or invoke another Lambda function that talks to core banking systems. This keeps your AI layer separate from business logic.

```python
import json
import os

import boto3
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
lambda_client = boto3.client("lambda")


def classify_intent(message: str) -> str:
    # Constrain the model to the same labels the routing logic checks below
    response = client.responses.create(
        model="gpt-4.1-mini",
        input=(
            "Classify this retail banking request into one of: "
            "balance_inquiry, card_dispute, payment_issue, loan_question, "
            f"general_support. Return only the label.\nRequest: {message}"
        ),
    )
    return response.output_text.strip()


def lambda_handler(event, context):
    body = json.loads(event.get("body", "{}"))
    message = body.get("message", "")
    intent = classify_intent(message)

    if intent == "card_dispute":
        downstream_payload = {
            "customer_id": body.get("customer_id"),
            "issue": message,
            "intent": intent
        }
        # Hand off to a separate Lambda that owns the dispute business logic
        result = lambda_client.invoke(
            FunctionName=os.environ["DISPUTE_TRIAGE_LAMBDA"],
            InvocationType="RequestResponse",
            Payload=json.dumps(downstream_payload).encode("utf-8")
        )
        downstream_response = json.loads(result["Payload"].read())
        return {
            "statusCode": 200,
            "body": json.dumps({
                "intent": intent,
                "result": downstream_response
            })
        }

    return {
        "statusCode": 200,
        "body": json.dumps({"intent": intent})
    }
```
- Generate a customer-facing answer from structured data

In production, don’t let the model invent account data. Fetch structured facts from your backend first, then ask OpenAI to turn them into a clear response.

```python
import json
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def build_response(customer_name: str, account_balance: float, available_credit: float) -> str:
    # The model only rephrases facts fetched from the backend; it never looks them up
    prompt = f"""
Write a short retail banking response.
Customer name: {customer_name}
Checking balance: ${account_balance:.2f}
Available credit: ${available_credit:.2f}
Rules:
- Be concise
- Do not mention internal systems
- Do not invent any additional account details
"""
    resp = client.responses.create(
        model="gpt-4.1-mini",
        input=prompt,
    )
    return resp.output_text.strip()


def lambda_handler(event, context):
    body = json.loads(event.get("body", "{}"))
    reply = build_response(
        customer_name=body.get("customer_name", "Customer"),
        account_balance=float(body.get("account_balance", 0)),
        available_credit=float(body.get("available_credit", 0))
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"reply": reply})
    }
```
- Package and deploy the Lambda function

Deploy with environment variables for secrets and function routing. Keep the OpenAI key in AWS Secrets Manager in real environments; use env vars only for local testing.

```shell
zip function.zip app.py

aws lambda create-function \
  --function-name retail-banking-agent \
  --runtime python3.10 \
  --handler app.lambda_handler \
  --role arn:aws:iam::123456789012:role/lambda-execution-role \
  --zip-file fileb://function.zip \
  --environment Variables="{OPENAI_API_KEY=your-key-here,DISPUTE_TRIAGE_LAMBDA=dispute-triage-handler}"
```
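The Secrets Manager approach mentioned above can be sketched as follows. The secret name `openai/api-key` and the JSON field layout are assumptions; adapt them to your own naming conventions:

```python
import json


def parse_secret(secret_string: str) -> str:
    # Secrets Manager returns the payload as a JSON string; pull out the key field
    return json.loads(secret_string)["OPENAI_API_KEY"]


def get_openai_api_key(secret_id: str = "openai/api-key") -> str:
    # Deferred import keeps cold starts cheap on code paths that never need the key
    import boto3

    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId=secret_id)
    return parse_secret(resp["SecretString"])
```

At module init you would call `get_openai_api_key()` once and pass the result to `OpenAI(api_key=...)` instead of reading `os.environ`, so the key never appears in the function configuration.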
Testing the Integration
Use a direct Lambda invocation to verify that the function can classify a request and produce a valid response path.
```python
import json

import boto3

lambda_client = boto3.client("lambda")

test_event = {
    "body": json.dumps({
        "customer_id": "cust_1001",
        "message": "I see an unfamiliar debit card charge from yesterday"
    })
}

response = lambda_client.invoke(
    FunctionName="retail-banking-agent",
    InvocationType="RequestResponse",
    Payload=json.dumps(test_event).encode("utf-8")
)
result = json.loads(response["Payload"].read())
print(result)
```
Expected output:
```json
{
  "statusCode": 200,
  "body": "{\"intent\": \"card_dispute\", \"result\": {...}}"
}
```
If you’re testing locally with `sam local invoke`, you should see the same intent classification and a valid JSON body.
Real-World Use Cases
- Card dispute triage
  - Classify complaint text with OpenAI.
  - Use Lambda to open a case in your disputes workflow.
- Loan pre-screening
  - Collect applicant details through an agent.
  - Run eligibility checks in Lambda and generate next-step guidance with OpenAI.
- Account servicing assistant
  - Answer balance or fee questions using structured account data.
  - Route complex cases to human support when confidence is low.
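The low-confidence routing in the last use case can be sketched like this, assuming you prompt the model to return JSON such as `{"intent": "...", "confidence": 0.93}`; the 0.75 threshold is an illustrative value you would tune against labeled traffic:

```python
import json

CONFIDENCE_THRESHOLD = 0.75  # illustrative cutoff; tune against labeled examples


def route(classification_json: str) -> dict:
    """Pick a handler from model output shaped like {"intent": "...", "confidence": 0.0-1.0}."""
    data = json.loads(classification_json)
    if data.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        # Not confident enough: escalate to a human agent
        return {"handler": "human_support", "intent": data.get("intent", "unknown")}
    return {"handler": "ai_agent", "intent": data["intent"]}


print(route('{"intent": "card_dispute", "confidence": 0.93}'))   # handled by ai_agent
print(route('{"intent": "loan_question", "confidence": 0.41}'))  # escalated to human_support
```

In the Lambda from step 3, this decision would sit between `classify_intent` and the downstream invoke, with the human path writing to a support queue instead of calling the triage function.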
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.