How to Build a Customer Support Agent for Lending Using AutoGen in Python
A lending support agent handles borrower questions, payment status, payoff quotes, application status, hardship options, and document requests without forcing a human to answer every routine ticket. In lending, that matters because every response can affect compliance, customer trust, and downstream decisions, so the agent needs strong guardrails, traceability, and clean handoff paths.
Architecture
- User-facing support agent: receives borrower questions and decides whether to answer directly or escalate.
- Policy/compliance checker: enforces lending rules (no unauthorized credit advice, no promises of approval, no disclosure of sensitive data).
- Loan servicing tools: read payment history, due dates, payoff balances, application status, and case notes from internal systems.
- Escalation agent: routes ambiguous, complaint-heavy, or regulated requests to a human support queue.
- Audit logger: stores prompts, tool calls, responses, and escalation reasons for review.
- Data access boundary: restricts what the agent can see based on region, role, and customer consent.
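A minimal sketch of what the data access boundary could look like, assuming a simple role-based field allowlist plus region and consent checks. The roles, field names, and policy values here are hypothetical, not part of AutoGen:

```python
from typing import Dict

# Hypothetical allowlist: fields visible per caller role. Anything not
# listed is stripped before the agent ever sees the record.
ROLE_VISIBLE_FIELDS = {
    "support_agent": {"loan_id", "status", "next_due_date", "amount_due"},
    "human_specialist": {
        "loan_id", "status", "next_due_date", "amount_due",
        "payoff_quote", "servicing_region",
    },
}

def apply_access_boundary(
    record: Dict[str, str], role: str, caller_region: str, has_consent: bool
) -> Dict[str, str]:
    # Residency check: out-of-region callers see nothing.
    if record.get("servicing_region") != caller_region:
        return {}
    # Consent check: no consent, no account-specific data.
    if not has_consent:
        return {}
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}
```

Wiring this in front of every tool call means the LLM never has to be trusted to "forget" fields it should not see.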
Implementation
- Set up AutoGen agents with explicit roles
Use AssistantAgent for the borrower-facing assistant and a second AssistantAgent as a compliance reviewer. In lending workflows, keep the assistant narrow: it should explain policy and fetch facts, not invent outcomes.
```python
from autogen import AssistantAgent

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": "YOUR_OPENAI_API_KEY",
        }
    ],
    "temperature": 0,
}

support_agent = AssistantAgent(
    name="lending_support_agent",
    llm_config=llm_config,
    system_message=(
        "You are a lending customer support agent. "
        "Answer only using verified loan servicing facts. "
        "Never promise approval, waive fees, or provide legal advice. "
        "Escalate complaints, disputes, hardship requests, fraud claims, "
        "and any request involving sensitive personal data."
    ),
)

compliance_agent = AssistantAgent(
    name="lending_compliance_agent",
    llm_config=llm_config,
    system_message=(
        "You are a compliance reviewer for lending support responses. "
        "Reject any response that includes unverified claims, prohibited advice, "
        "or disclosures beyond the approved policy."
    ),
)
```
- Add tools for loan servicing lookups
AutoGen supports tool use through regular Python functions exposed to the agent. Keep these functions deterministic and narrow; they should return structured data from your backend systems.
```python
from typing import Dict

def get_loan_status(loan_id: str) -> Dict[str, str]:
    # Replace with real service call
    return {
        "loan_id": loan_id,
        "status": "current",
        "next_due_date": "2026-05-15",
        "amount_due": "$428.11",
        "payoff_quote": "$18,240.77",
        "servicing_region": "US",
    }

def get_case_notes(customer_id: str) -> Dict[str, str]:
    # Replace with real service call
    return {
        "customer_id": customer_id,
        "open_cases": "0",
        "last_contact_reason": "billing question",
    }
```
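Keeping these tools narrow also means validating inputs before the backend ever sees them. A sketch, assuming loan IDs look like `L` followed by three to ten digits (a hypothetical format; match it to your backend's real identifier scheme):

```python
import re
from typing import Callable, Dict

# Hypothetical loan-ID format: "L" + 3-10 digits.
LOAN_ID_PATTERN = re.compile(r"^L\d{3,10}$")

def guard_loan_lookup(
    loan_id: str, lookup: Callable[[str], Dict[str, str]]
) -> Dict[str, str]:
    # Reject malformed IDs before any backend call, so free-text input
    # from the conversation can never become an arbitrary lookup.
    if not LOAN_ID_PATTERN.fullmatch(loan_id):
        return {"error": "invalid_loan_id"}
    return lookup(loan_id)
```

Exposing `guard_loan_lookup` (wrapping `get_loan_status`) to the agent instead of the raw lookup keeps the tool deterministic even when the model passes through garbage.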
- Wire in a conversation loop with compliance review
A practical pattern is: user message → support agent drafts answer → compliance agent reviews → either send answer or escalate. This keeps the final response auditable and prevents policy drift.
```python
from autogen import UserProxyAgent

user_proxy = UserProxyAgent(
    name="borrower_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

def answer_borrower(question: str) -> str:
    draft = support_agent.generate_reply(
        messages=[{"role": "user", "content": question}]
    )
    review_prompt = f"""
Review this lending support draft for compliance.
Draft:
{draft}
Rules:
- No promises of approval
- No legal or credit advice
- No disclosure of sensitive data
- Escalate disputes/hardship/fraud/complaints
Return either APPROVED: <final text> or REJECTED: <reason>
"""
    review = compliance_agent.generate_reply(
        messages=[{"role": "user", "content": review_prompt}]
    )
    if isinstance(review, str) and review.startswith("APPROVED:"):
        # Strip only the leading marker, not later occurrences in the text.
        return review.replace("APPROVED:", "", 1).strip()
    return (
        "I’m routing this to a human specialist because it requires manual review "
        "(for example: dispute handling, hardship options, or account-specific exceptions)."
    )

print(answer_borrower("What is my payoff amount for loan L12345?"))
```
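The regulated intents named in the system message can also be pre-screened with explicit routing rules before the support agent drafts anything, so a dispute or fraud report never reaches the LLM at all. A rough sketch using keyword matching; the keywords are illustrative, and a production system would typically use a trained intent classifier instead of substring checks:

```python
from typing import Optional

# Illustrative keyword -> regulated-intent map.
ESCALATION_INTENTS = {
    "dispute": "disputes",
    "hardship": "hardship",
    "fraud": "fraud",
    "complaint": "complaints",
    "collection": "collections_practices",
}

def route_regulated_intent(question: str) -> Optional[str]:
    """Return the matched regulated intent, or None when the support
    agent is allowed to draft a reply."""
    text = question.lower()
    for keyword, intent in ESCALATION_INTENTS.items():
        if keyword in text:
            return intent
    return None
```

Calling `route_regulated_intent` at the top of `answer_borrower` and short-circuiting to the human queue on a match keeps the escalation rule enforceable in code rather than only in the prompt.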
- Use group chat when you need multi-agent orchestration
For more complex flows—like verifying an account issue before answering—use GroupChat and GroupChatManager. That gives you a clean pattern for role separation without stuffing everything into one prompt.
```python
from autogen import GroupChat, GroupChatManager

groupchat = GroupChat(
    agents=[support_agent, compliance_agent],
    messages=[],
    max_round=4,
)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)
```
Production Considerations
- Deployment
  - Keep the model behind your internal API gateway.
  - Route by geography so customer data stays in-region when required by residency rules.
  - Separate production servicing credentials from support-agent runtime credentials.
- Monitoring
  - Log every tool call with loan ID hash, timestamp, decision path, and escalation reason.
  - Track the refusal rate on regulated intents like disputes and hardship requests.
  - Sample conversations weekly for compliance QA.
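One way to structure such a tool-call log line, assuming JSON log output and SHA-256 hashing of the loan ID (both implementation choices of this sketch, not AutoGen features):

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def audit_record(
    loan_id: str,
    tool_name: str,
    decision_path: str,
    escalation_reason: Optional[str] = None,
) -> str:
    # Hash the loan ID so QA reviewers can join log lines for one loan
    # without the logs ever storing the raw identifier.
    entry = {
        "loan_id_hash": hashlib.sha256(loan_id.encode("utf-8")).hexdigest(),
        "tool": tool_name,
        "decision_path": decision_path,
        "escalation_reason": escalation_reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)
```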
- Guardrails
  - Block free-text access to raw PII unless the user is authenticated and authorized.
  - Hard-code disallowed intents, such as adverse action explanations beyond the policy text set by legal/compliance.
  - Require human approval for fee waivers, repayment plan changes, fraud claims, and complaint resolution.
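The human-approval requirement can be enforced as a hard gate in code rather than in the prompt. A minimal sketch with illustrative action names:

```python
# Actions the agent may describe to a borrower but must never execute
# on its own; the names are illustrative placeholders.
HUMAN_APPROVAL_REQUIRED = {
    "fee_waiver",
    "repayment_plan_change",
    "fraud_claim",
    "complaint_resolution",
}

def authorize_action(action: str, approved_by_human: bool) -> bool:
    """Gate sensitive actions behind an explicit human sign-off."""
    if action in HUMAN_APPROVAL_REQUIRED:
        return approved_by_human
    return True  # read-only lookups pass through
```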
- Auditability
  - Store prompt versioning alongside model versioning.
  - Persist both draft and approved responses so reviewers can reconstruct what happened later.
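A minimal record that keeps the draft, the approved text, and both versions together might look like this (the field names and version strings are illustrative):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class ReviewedResponse:
    prompt_version: str       # e.g. a git tag for the system prompts
    model_version: str        # the exact model identifier used
    draft: str                # what the support agent proposed
    approved: Optional[str]   # final customer-facing text, or None if rejected
    rejection_reason: Optional[str] = None
```

Persisting `asdict(record)` for every turn lets a reviewer later see exactly which prompt and model produced a given draft, and why it was or was not sent.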
Common Pitfalls
- Letting the model improvise on account facts
  - Force all balance/status/payoff answers through backend tools like get_loan_status.
  - Never let the agent “estimate” amounts due or payoff quotes.
- Skipping escalation logic for regulated cases
  - Hardship requests, disputes, fraud reports, and complaints about collections practices need human handling.
  - Put those intents behind explicit routing rules before the model writes a customer-facing reply.
- Ignoring residency and retention requirements
  - If your lending stack serves multiple jurisdictions, don’t send all conversations to one global logging bucket.
  - Partition logs by region and set retention policies that match your legal hold and privacy requirements.
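A sketch of region-partitioned log paths with a per-region retention table; the regions and day counts are placeholders, and the real values come from your legal and privacy teams:

```python
from datetime import date
from typing import Dict

# Illustrative retention windows in days, keyed by region.
RETENTION_DAYS: Dict[str, int] = {"US": 2555, "EU": 1095}

def log_partition_key(region: str, day: date) -> str:
    # Refuse to log for a region with no configured policy, rather than
    # silently falling back to a global bucket.
    if region not in RETENTION_DAYS:
        raise ValueError(f"no retention policy configured for region {region!r}")
    return f"logs/region={region}/dt={day.isoformat()}/"
```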
- Using one prompt for everything
  - Support drafting and compliance review are different jobs.
  - Split them into separate agents so you can test policy behavior independently and update rules without breaking customer tone.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.