How to Build a Customer Support Agent Using LangGraph in Python for Wealth Management
A customer support agent for wealth management answers client questions, routes requests to the right policy or account workflow, and escalates anything sensitive to a human advisor. It matters because every response can touch regulated advice, portfolio data, or client identity, so the agent needs more than chat—it needs control flow, traceability, and guardrails.
Architecture
- User intake layer
  - Accepts the client message plus metadata like `client_id`, `jurisdiction`, `channel`, and `risk_tier` (a request schema sketch follows this list).
  - This metadata drives routing and compliance checks.
- Intent classifier node
  - Detects whether the request is about statements, fees, performance, tax docs, transfers, or complaints.
  - Also flags anything that looks advice-seeking or account-sensitive.
- Policy and compliance gate
  - Checks whether the request can be answered automatically.
  - Blocks disallowed content like personalized investment recommendations or unverified account actions.
- Knowledge retrieval node
  - Pulls approved answers from internal FAQs, product docs, fee schedules, and service policies.
  - Only retrieves from whitelisted sources with versioned documents.
- Escalation node
  - Hands off to a human advisor or service rep when confidence is low or the request is regulated.
  - Includes the full audit trail for review.
- Audit and logging layer
  - Stores input, routing decisions, retrieved sources, and final response.
  - Needed for compliance reviews, dispute handling, and model governance.
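To make the intake contract concrete, here is a minimal sketch of the request payload. `SupportRequest` and any fields beyond the four metadata keys named above are illustrative assumptions, not part of LangGraph.

```python
from typing import Literal, TypedDict

class SupportRequest(TypedDict):
    """Hypothetical intake payload; shape is illustrative, not a LangGraph type."""
    text: str                              # raw client message
    client_id: str                         # internal client identifier
    jurisdiction: str                      # e.g. "US" or "EU"; drives residency and policy rules
    channel: Literal["web", "mobile", "phone"]
    risk_tier: str                         # firm-defined client risk classification
```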
Implementation
- Define state and build a graph with explicit routing
LangGraph works well here because you can model support as a controlled workflow instead of one big prompt. For wealth management, that means every branch is visible: answer from policy, retrieve from docs, or escalate.
```python
from typing import TypedDict, Annotated

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage


class SupportState(TypedDict):
    messages: Annotated[list, add_messages]
    intent: str
    compliant: bool
    answer: str
    escalate: bool


def classify_intent(state: SupportState):
    # Deterministic keyword rules, not a model call: the first gate is auditable.
    text = state["messages"][-1].content.lower()
    if "buy" in text or "sell" in text or "best fund" in text:
        return {"intent": "advice", "compliant": False}
    if "statement" in text or "fee" in text:
        return {"intent": "service", "compliant": True}
    return {"intent": "unknown", "compliant": False}


def route(state: SupportState):
    # Unknown or non-compliant requests always go to a human.
    if not state["compliant"]:
        return "escalate"
    return "answer"


graph = StateGraph(SupportState)
graph.add_node("classify", classify_intent)
graph.add_node("answer", lambda state: {"answer": "Here is the approved service response.", "escalate": False})
graph.add_node("escalate", lambda state: {"answer": "Transferring you to a licensed advisor.", "escalate": True})

graph.add_edge(START, "classify")
graph.add_conditional_edges("classify", route, {
    "answer": "answer",
    "escalate": "escalate",
})
graph.add_edge("answer", END)
graph.add_edge("escalate", END)

app = graph.compile()

result = app.invoke({"messages": [HumanMessage(content="Can you recommend a good fund?")]})
print(result["answer"])  # -> "Transferring you to a licensed advisor."
```
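Note the design choice here: routing lives in plain Python (`route`), so the compliance-relevant branching can be unit-tested and audited without ever invoking a model.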
- Add retrieval for approved support content
For real support flows, you do not want the model inventing fee rules or transfer policies. Use retrieval only from approved sources and keep responses grounded in those documents.
```python
from langchain_core.documents import Document

APPROVED_DOCS = [
    Document(page_content="Fee waivers apply only to accounts over $250k.", metadata={"doc_id": "fees-v3"}),
    Document(page_content="Statements are available on the first business day of each month.", metadata={"doc_id": "statements-v2"}),
]


def retrieve_policy(state: SupportState):
    # Naive keyword matching stands in for a real vector store over the approved corpus.
    query = state["messages"][-1].content.lower()
    matches = [
        d for d in APPROVED_DOCS
        if any(word in d.page_content.lower() for word in query.split())
    ]
    if not matches:
        # Nothing approved covers this question: escalate rather than improvise.
        return {"answer": "", "escalate": True}
    doc = matches[0]
    return {
        "answer": f"Approved policy reference ({doc.metadata['doc_id']}): {doc.page_content}",
        "escalate": False,
        "compliant": True,
    }
```
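Before wiring this into a graph, you can sanity-check the retrieval step on its own; the sample question below is illustrative.

```python
from langchain_core.messages import HumanMessage

# Standalone check of the retrieval step, outside any graph.
sample_state = {"messages": [HumanMessage(content="Do I qualify for a fee waiver?")]}
print(retrieve_policy(sample_state)["answer"])
# Returns the fees-v3 reference, since "fee" appears in that document.
```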
- Connect compliance checks before any customer-facing answer
Wealth management has hard lines. The agent should refuse personalized investment advice unless your firm has explicitly designed a supervised advisory workflow with suitability controls.
```python
def compliance_gate(state: SupportState):
    # Hard refusal list: personalized advice never gets an automated answer.
    text = state["messages"][-1].content.lower()
    blocked_phrases = ["what should i buy", "best stock", "should i sell", "guaranteed return"]
    if any(p in text for p in blocked_phrases):
        return {"compliant": False}
    return {"compliant": True}


workflow = StateGraph(SupportState)
workflow.add_node("classify", classify_intent)
workflow.add_node("gate", compliance_gate)
workflow.add_node("retrieve", retrieve_policy)
workflow.add_node("escalate", lambda state: {"answer": "A licensed representative will review this request.", "escalate": True})

workflow.add_edge(START, "classify")
workflow.add_edge("classify", "gate")
workflow.add_conditional_edges(
    "gate",
    lambda state: "retrieve" if state["compliant"] else "escalate",
    {"retrieve": "retrieve", "escalate": "escalate"},
)
# Retrieval fails closed: if no approved document matched, the request is
# escalated instead of ending with an empty answer.
workflow.add_conditional_edges(
    "retrieve",
    lambda state: "escalate" if state["escalate"] else "respond",
    {"escalate": "escalate", "respond": END},
)
workflow.add_edge("escalate", END)

app = workflow.compile()
```
- Run with audit-ready inputs
Pass structured metadata into the graph so every decision can be traced later. In production you would persist these values alongside model output and retrieval references.
```python
from langchain_core.messages import HumanMessage

request_state = {
    "messages": [HumanMessage(content="When will my monthly statement be ready?")],
    "intent": "",
    "compliant": False,   # fail closed until the gate says otherwise
    "answer": "",
    "escalate": False,
}

output = app.invoke(request_state)
print(output["answer"])  # -> cites the statements-v2 policy document
```
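Since the paragraph above calls for persisting every decision, here is a minimal sketch, assuming a JSON-lines file as the store. `write_audit_record` and its field set are illustrative; a real deployment would use your firm's logging pipeline.

```python
import json
import uuid
from datetime import datetime, timezone

def write_audit_record(request_state, output, path="audit_log.jsonl"):
    # One JSON line per request: input, routing outcome, and final answer.
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": request_state["messages"][-1].content,
        "intent": output.get("intent", ""),
        "compliant": output.get("compliant", False),
        "escalated": output.get("escalate", False),
        "answer": output.get("answer", ""),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

write_audit_record(request_state, output)
```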
Production Considerations
- Keep a full audit trail
  - Store user input, intent classification, compliance decision, retrieved document IDs, model version, and final response.
  - This is non-negotiable for disputes and regulatory review.
- Control data residency
  - If your clients are in different jurisdictions, pin storage and inference to approved regions.
  - Do not send PII or account data to endpoints outside your legal boundary.
- Add human escalation by default
  - Anything involving suitability, transfers of large assets, complaints about fiduciary duty, or account changes should route to a licensed human.
  - The graph should make escalation cheap and deterministic.
- Monitor refusal rates and false positives
  - A high refusal rate usually means your classifier is too aggressive.
  - A high false-negative rate means risky questions are slipping through without review (a monitoring sketch follows this list).
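As a rough sketch of that monitoring, assuming the JSON-lines audit log from the earlier example (the file name and fields are illustrative):

```python
import json

def escalation_rate(path: str = "audit_log.jsonl") -> float:
    # Share of requests routed to a human; track alongside sampled
    # transcripts to catch both over-blocking and risky misses.
    with open(path) as f:
        records = [json.loads(line) for line in f]
    if not records:
        return 0.0
    return sum(r["escalated"] for r in records) / len(records)
```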
Common Pitfalls
- Using an unconstrained LLM as the first step
  - Bad move. You lose control over regulated content.
  - Start with deterministic classification and policy gates before generation.
- Letting retrieval pull from unapproved documents
  - If your vector store contains stale PDFs or marketing material, the agent will cite them.
  - Use curated corpora with versioning and document ownership.
- Skipping escalation context
  - A handoff that only says “transfer to human” wastes time.
  - Include the intent label, compliance flags, source docs used, and conversation history so the advisor does not start blind (a handoff sketch follows this list).
- Treating audit logs as optional
  - In wealth management they are part of the product surface.
  - Log every branch taken by `StateGraph`, every `invoke`, and every source used in the answer path.
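A minimal sketch of such a handoff bundle, assuming the `SupportState` defined earlier (`build_handoff` and its field names are illustrative, not a LangGraph API):

```python
def build_handoff(state: SupportState) -> dict:
    # Everything the human reviewer needs to continue without re-asking.
    return {
        "intent": state.get("intent", "unknown"),
        "compliance_flags": {"compliant": state.get("compliant", False)},
        "conversation": [m.content for m in state["messages"]],
        # In production, also attach retrieved doc IDs and the model version.
    }
```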
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.