How to Integrate OpenAI with AWS Lambda for Multi-Agent Lending Systems
Combining OpenAI with AWS Lambda gives you a clean way to run agentic lending workflows without keeping servers alive. You can split work across small Lambda functions: one agent extracts borrower data, another checks policy rules, and a third drafts a decision memo or next action.
This is the pattern I use when I want multi-agent systems that are event-driven, auditable, and cheap to operate. OpenAI handles the reasoning layer; Lambda handles orchestration, isolation, and retries.
Prerequisites
- AWS account with permission to create:
  - Lambda functions
  - IAM roles
  - CloudWatch logs
- Python 3.11 locally
- AWS CLI configured with aws configure
- An OpenAI API key stored as an environment variable: OPENAI_API_KEY
- boto3 installed for AWS SDK access
- openai Python SDK installed
- A lending workflow with at least one agent task defined, such as:
  - document extraction
  - borrower risk summary
  - policy-based recommendation
Install the dependencies:
pip install openai boto3
Integration Steps
1) Create a Lambda handler that calls OpenAI
Start with a Lambda function that receives an application payload and asks OpenAI to extract structured lending fields. Keep the prompt narrow so the output is predictable.
import json
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def lambda_handler(event, context):
    application_text = event["application_text"]

    response = client.responses.create(
        model="gpt-4.1-mini",
        input=[
            {
                "role": "system",
                "content": (
                    "You are a lending analyst. Extract borrower name, income, "
                    "monthly debt, requested amount, and risk flags as JSON."
                ),
            },
            {"role": "user", "content": application_text},
        ],
    )

    return {
        "statusCode": 200,
        "body": json.dumps({
            "result": response.output_text
        }),
    }
This is the core call you want in the first agent. The important method is client.responses.create(...), which is the current OpenAI API path for structured reasoning and generation.
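Because the next agent consumes this output programmatically, it helps to fail loudly when the model returns something that isn't JSON. Here's a minimal guard you could wrap around response.output_text; parse_extraction is an illustrative helper, not part of the OpenAI SDK:

import json

def parse_extraction(output_text: str) -> dict:
    # Parse the model's JSON output; raise a clear error so the caller can
    # retry or route the application to manual review.
    try:
        return json.loads(output_text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model did not return valid JSON: {output_text[:200]}") from exc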
2) Package the Lambda function and attach permissions
Your Lambda needs permission to write logs and read any secrets or queue messages you use later in the workflow. For a minimal setup, give it basic execution rights.
import json

import boto3

iam = boto3.client("iam")

role_name = "lending-agent-lambda-role"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }
    ]
}

role = iam.create_role(
    RoleName=role_name,
    AssumeRolePolicyDocument=json.dumps(policy_document)
)

iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
)
In production, add tighter permissions for Secrets Manager or SQS if your multi-agent flow uses them. Don’t give the function broad access just because it’s convenient.
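If you do move the API key into Secrets Manager, a scoped inline policy is enough. A minimal sketch, assuming a hypothetical secret ARN; swap in the ARN of the secret that actually holds your key:

import json

import boto3

iam = boto3.client("iam")

# Hypothetical ARN; replace with your own secret.
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:lending/openai-api-key-*"

iam.put_role_policy(
    RoleName="lending-agent-lambda-role",
    PolicyName="lending-agent-read-openai-secret",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": secret_arn,
            }
        ],
    }),
)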
3) Deploy the Lambda function from Python using boto3
Use boto3.client("lambda") to deploy the code artifact and wire in environment variables. This keeps your deployment script close to your application code.
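The deploy script below expects a lambda_function.zip artifact. One way to build it, assuming your handler lives in lambda_function.py and that you vendor the openai package into the archive (boto3 is already available in the Lambda runtime):

import shutil
import subprocess

# Vendor dependencies into a build directory so they ship inside the zip.
subprocess.run(["pip", "install", "--target", "build", "openai"], check=True)
shutil.copy("lambda_function.py", "build/lambda_function.py")

# Produces lambda_function.zip with the handler at the archive root.
shutil.make_archive("lambda_function", "zip", "build")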
import os

import boto3

lambda_client = boto3.client("lambda")

with open("lambda_function.zip", "rb") as f:
    zip_bytes = f.read()

response = lambda_client.create_function(
    FunctionName="lending-openai-agent",
    Runtime="python3.11",
    Role="arn:aws:iam::123456789012:role/lending-agent-lambda-role",
    Handler="lambda_function.lambda_handler",
    Code={"ZipFile": zip_bytes},
    Timeout=30,
    MemorySize=512,
    Environment={
        "Variables": {
            "OPENAI_API_KEY": os.environ["OPENAI_API_KEY"]
        }
    },
)

print(response["FunctionArn"])
If you already have the function created, use update_function_code() instead of create_function(). That’s the safer path for iterative development.
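A minimal sketch of that update path, reusing the lambda_client and zip artifact from above:

with open("lambda_function.zip", "rb") as f:
    zip_bytes = f.read()

# Updates only the code; environment variables, memory, and timeout are
# changed separately via update_function_configuration().
lambda_client.update_function_code(
    FunctionName="lending-openai-agent",
    ZipFile=zip_bytes,
)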
4) Add a second agent step with AWS Lambda orchestration
Multi-agent systems work better when each Lambda does one thing well. A common pattern is to have one Lambda call OpenAI for extraction and another Lambda do policy validation or routing.
import json

import boto3

lambda_client = boto3.client("lambda")

def invoke_policy_agent(extracted_json):
    payload = {
        "extracted_application": extracted_json,
        "policy_version": "2026-01"
    }
    resp = lambda_client.invoke(
        FunctionName="lending-policy-agent",
        InvocationType="RequestResponse",
        Payload=json.dumps(payload).encode("utf-8"),
    )
    return json.loads(resp["Payload"].read())

def lambda_handler(event, context):
    extracted = event["extracted_application"]
    policy_result = invoke_policy_agent(extracted)

    return {
        "statusCode": 200,
        "body": json.dumps({
            "extracted_application": extracted,
            "policy_result": policy_result
        }),
    }
This is where AWS Lambda earns its keep. Each agent can be independently deployed, versioned, retried, and monitored without turning your app into a monolith.
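If you want that versioning and retry behavior to be explicit, here is a sketch using the extraction agent from step 3; the alias name and retry limits are illustrative:

import boto3

lambda_client = boto3.client("lambda")

# Publish an immutable version and point a "live" alias at it, so each agent
# can be rolled forward or back independently.
version = lambda_client.publish_version(FunctionName="lending-openai-agent")
lambda_client.create_alias(
    FunctionName="lending-openai-agent",
    Name="live",
    FunctionVersion=version["Version"],
)

# Cap retries for asynchronous invocations so a bad payload doesn't hammer
# the OpenAI API.
lambda_client.put_function_event_invoke_config(
    FunctionName="lending-openai-agent",
    MaximumRetryAttempts=1,
    MaximumEventAgeInSeconds=3600,
)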
5) Return a final lending decision from the last agent
Use OpenAI again if you want a concise explanation layer on top of deterministic policy results. The model should not invent approvals; it should summarize what downstream agents already decided.
import json
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def lambda_handler(event, context):
    policy_result = event["policy_result"]

    prompt = f"""
    Summarize this lending decision in plain English for an internal reviewer.
    Use only these facts:
    {json.dumps(policy_result)}
    """

    response = client.responses.create(
        model="gpt-4.1-mini",
        input=prompt,
    )

    return {
        "statusCode": 200,
        "body": json.dumps({
            "decision_summary": response.output_text,
            "policy_result": policy_result,
        }),
    }
Keep decision authority in rules or human review if you’re operating in regulated lending. Use OpenAI to explain outcomes and route cases, not to silently approve credit on its own.
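For reference, the lending-policy-agent invoked in step 4 can stay fully deterministic. A sketch of what that handler might look like; the field names and thresholds are illustrative, not real underwriting policy:

# Illustrative thresholds only; real values come from your underwriting rules.
MAX_DTI = 0.43
MAX_AMOUNT = 50000

def lambda_handler(event, context):
    app = event["extracted_application"]
    dti = app["monthly_debt"] / app["income"]

    if app["requested_amount"] > MAX_AMOUNT:
        decision, reason = "refer_to_underwriter", "requested amount above automated limit"
    elif dti > MAX_DTI:
        decision, reason = "decline", f"debt-to-income ratio {dti:.2f} exceeds {MAX_DTI}"
    else:
        decision, reason = "approve_for_review", f"debt-to-income ratio {dti:.2f} within policy"

    return {
        "decision": decision,
        "reason": reason,
        "policy_version": event.get("policy_version"),
    }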
Testing the Integration
Invoke your deployed Lambda with a sample loan application payload and verify that it returns extracted fields plus a downstream summary.
import json

import boto3

lambda_client = boto3.client("lambda")

test_event = {
    "application_text": (
        "Borrower: Jane Doe. Monthly income: $8,500. "
        "Monthly debt: $2,100. Requested amount: $25,000. "
        "Employment: full-time software engineer."
    )
}

response = lambda_client.invoke(
    FunctionName="lending-openai-agent",
    InvocationType="RequestResponse",
    Payload=json.dumps(test_event).encode("utf-8"),
)

result = json.loads(response["Payload"].read())
print(json.dumps(result, indent=2))
Expected output:
{
  "statusCode": 200,
  "body": "{\"result\": \"{\\\"borrower_name\\\":\\\"Jane Doe\\\",\\\"income\\\":8500,...}\"}"
}
If you see valid JSON-like extraction text coming back from OpenAI through Lambda, the integration is working end to end.
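Note that the body is double-encoded: a JSON string wrapping the model's JSON text. A small snippet to unwrap it, assuming the model returned bare JSON and reusing the result object from the test above (the exact field names depend on what the model emitted):

import json

body = json.loads(result["body"])        # Lambda response body (a JSON string)
extracted = json.loads(body["result"])   # the model's JSON text inside it

print(extracted.get("borrower_name"), extracted.get("income"))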
Real-World Use Cases
- Loan intake triage
  - One agent extracts application data from emails or PDFs.
  - Another validates against underwriting rules.
  - A final agent drafts a reviewer note for ops staff.
- Exception handling
  - Route borderline cases to a human review queue.
  - Use OpenAI to summarize missing documents or conflicting fields.
  - Use Lambda to fan out tasks across multiple agents (see the sketch after this list).
- Portfolio monitoring
  - Trigger Lambdas on delinquency events.
  - Have agents summarize account history and recommend next actions.
  - Push outputs into Slack, Jira, or your case management system.
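For the fan-out step mentioned above, asynchronous Event invocations keep the agents decoupled. A sketch with hypothetical downstream function names:

import json

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical downstream agents; swap in your own function names.
AGENTS = ["lending-policy-agent", "lending-doc-request-agent", "lending-audit-agent"]

def fan_out(extracted_application: dict) -> None:
    payload = json.dumps({"extracted_application": extracted_application}).encode("utf-8")
    for function_name in AGENTS:
        # "Event" invocations are asynchronous: Lambda queues the call, runs the
        # agents in parallel, and handles retries instead of this caller.
        lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="Event",
            Payload=payload,
        )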
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.