How to Build a Loan Approval Agent Using LangChain in Python for Wealth Management
A loan approval agent in wealth management screens applications, pulls client context, checks policy constraints, and produces a recommendation with an audit trail. It matters because relationship managers need faster decisions without losing control over compliance, suitability, and risk.
Architecture
- Client intake layer
  - Accepts structured loan data: income, liabilities, assets under management, jurisdiction, purpose of loan, and requested amount.
  - Normalizes inputs before they hit the model.
- Policy engine
  - Encodes lending rules for LTV, DTI, minimum liquidity, concentration limits, and jurisdiction-specific restrictions.
  - Keeps hard constraints outside the LLM.
- LLM reasoning layer
  - Uses LangChain to interpret edge cases, summarize exceptions, and generate a recommendation.
  - Should not be the source of truth for approvals.
- Retrieval layer
  - Pulls internal policy docs, credit memo templates, and compliance guidance using RetrievalQA or a retrieval chain.
  - Keeps decisions aligned with current procedures.
- Audit and decision logging
  - Stores the full input payload, retrieved policy snippets, model output, final decision, and reviewer overrides.
  - Required for model risk management and regulatory review.
- Human review handoff
  - Routes borderline cases to a human underwriter or RM.
  - Critical for wealth management, where relationship context can override pure scoring.
Implementation
1) Install dependencies and define your inputs
Use LangChain’s current split packages. For this example, I’m using OpenAI as the chat model and FAISS for retrieval.
pip install langchain langchain-openai langchain-community faiss-cpu pydantic
Create a structured request object so your agent does not depend on free-form text.
from pydantic import BaseModel, Field

class LoanApplication(BaseModel):
    client_id: str
    jurisdiction: str
    annual_income: float = Field(gt=0)
    liquid_assets: float = Field(ge=0)
    existing_debt: float = Field(ge=0)
    requested_amount: float = Field(gt=0)
    purpose: str
    relationship_value_usd: float = Field(ge=0)
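Because the request object is Pydantic-based, malformed input fails before any model call, which is exactly the guardrail you want in front of a credit workflow. A minimal sketch of catching that failure (the payload and error handling here are illustrative, not part of the original example):

from pydantic import ValidationError

raw_payload = {"client_id": "C12345", "jurisdiction": "UK", "annual_income": -1}

try:
    app = LoanApplication(**raw_payload)
except ValidationError as exc:
    # Negative income and the missing required fields are both rejected here,
    # long before the LLM or retriever is involved.
    print(exc.error_count(), "validation errors")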
2) Build a policy retriever from internal documents
In wealth management you need to answer “why was this decision made?” with references to policy text. Load internal docs into a vector store and retrieve them at decision time.
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
loader = TextLoader("loan_policy.md", encoding="utf-8")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)
chunks = splitter.split_documents(docs)
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
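Before wiring the retriever into a chain, it is worth a quick smoke test to confirm it surfaces relevant policy text (the query string below is just an example):

# Smoke test: the retriever should return chunks about the queried policy area.
hits = retriever.invoke("maximum LTV for property-backed lending")
for doc in hits:
    print(doc.metadata, doc.page_content[:80])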
3) Create the LangChain chain that produces a recommendation
Use ChatOpenAI plus ChatPromptTemplate and StrOutputParser. The prompt should force the model to explain itself in policy terms and return a conservative recommendation when data is missing.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a loan approval assistant for wealth management. "
     "Follow policy strictly. If data is incomplete or borderline, recommend manual review. "
     "Do not invent facts. Cite relevant policy excerpts in your reasoning."),
    ("human",
     "Application:\n{application}\n\nPolicy excerpts:\n{policy}\n\n"
     "Return:\n"
     "1. Decision: APPROVE / DECLINE / REVIEW\n"
     "2. Reasoning\n"
     "3. Key risks\n"
     "4. Missing information")
])

chain = prompt | llm | StrOutputParser()
Now wire it together with basic rule checks before the LLM sees anything. Hard constraints belong in code.
def hard_rules(app: LoanApplication) -> str:
    # Deterministic eligibility checks. These run before any model call
    # and are the source of truth for clear violations.
    dti = app.existing_debt / max(app.annual_income, 1)
    ltv_proxy = app.requested_amount / max(app.liquid_assets + app.relationship_value_usd * 0.2, 1)
    if app.jurisdiction.lower() in {"sanctioned", "restricted"}:
        return "DECLINE: jurisdiction restriction"
    if dti > 0.45:
        return f"REVIEW: DTI too high ({dti:.2f})"
    if ltv_proxy > 0.75:
        return f"REVIEW: exposure too high ({ltv_proxy:.2f})"
    return "PASS"

def evaluate_application(app: LoanApplication) -> dict:
    rule_result = hard_rules(app)
    if rule_result != "PASS":
        return {
            "decision": rule_result.split(":")[0],
            "reason": rule_result,
            "audit": {"rule_result": rule_result},
        }
    policy_docs = retriever.invoke(
        f"Loan approval rules for {app.jurisdiction}, income {app.annual_income}, "
        f"liquid assets {app.liquid_assets}, debt {app.existing_debt}, purpose {app.purpose}"
    )
    policy_text = "\n\n".join(doc.page_content for doc in policy_docs)
    result = chain.invoke({
        "application": app.model_dump_json(indent=2),
        "policy": policy_text,
    })
    # Anything that reaches the LLM is still routed to human review;
    # the model output is an advisory recommendation, never the final decision.
    return {
        "decision": "REVIEW",
        "reason": result,
        "audit": {
            "application": app.model_dump(),
            "policy_sources": [doc.metadata for doc in policy_docs],
        },
    }
4) Run the agent and persist the audit trail
For production you want every decision stored with traceability. This example prints the result; in practice write it to your case management system or database.
sample = LoanApplication(
    client_id="C12345",
    jurisdiction="UK",
    annual_income=350000,
    liquid_assets=900000,
    existing_debt=120000,
    requested_amount=250000,
    purpose="Property acquisition",
    relationship_value_usd=2200000,
)

decision = evaluate_application(sample)
print(decision["decision"])
print(decision["reason"])
Production Considerations
- Compliance controls
  - Keep approval thresholds in deterministic code or config managed by compliance.
  - Use the LLM only for explanation and exception handling.
  - Log every retrieved policy chunk so reviewers can reconstruct the decision path.
- Data residency
  - Wealth clients often have strict regional storage requirements.
  - Pin embeddings, vector stores, logs, and model endpoints to approved regions.
  - Avoid sending unnecessary PII into prompts; redact account numbers and tax IDs before inference (see the sketch after this list).
- Monitoring
  - Track approval rates by jurisdiction, RM team, product type, and client segment.
  - Watch for drift in borderline-review volume; it often signals policy changes or bad prompts.
  - Capture latency separately for retrieval and generation so you can tune each layer.
- Guardrails
  - Reject any application that lacks required fields before calling the model.
  - Enforce maximum exposure limits outside the LLM.
  - Add human approval for politically exposed persons, cross-border lending, or complex trust structures.
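A minimal redaction sketch, assuming simple regex patterns for bare account numbers and US-style tax IDs. Real deployments would use a proper PII detection service; the patterns below are illustrative only:

import re

# Illustrative patterns; tune these to your own account and tax ID formats.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{8,12}\b"), "[ACCOUNT_REDACTED]"),          # bare account numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[TAX_ID_REDACTED]"),  # SSN-style tax IDs
]

def redact_pii(text: str) -> str:
    # Apply to any free-text field before it is embedded or sent in a prompt.
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(redact_pii("Client account 123456789, tax ID 123-45-6789"))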
Common Pitfalls
- Letting the LLM make final credit decisions
  - Avoid this by making hard eligibility checks deterministic.
  - The model should recommend; your rules engine should decide on clear violations.
- Skipping auditability
  - If you cannot show which policy text influenced the output, you do not have a production system.
  - Store application payloads, retrieved documents, prompt versions, model version, and final reviewer action.
- Ignoring wealth-management context
  - A high-net-worth client may look risky on paper but still qualify under relationship-based policies.
  - Encode those exceptions explicitly instead of hoping the model infers them from vague prompts.
- Using unbounded retrieval
  - Pulling too many documents increases noise and weakens decisions.
  - Keep retrieval narrow with k limits and curated document sets per product or jurisdiction (see the sketch below).
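One way to keep retrieval narrow, assuming your policy chunks carry a jurisdiction field in their metadata. The filter syntax below is what LangChain's FAISS wrapper accepts; verify it against whichever vector store you actually use:

# Tag chunks with jurisdiction metadata at ingestion time, then filter at query time.
for chunk in chunks:
    chunk.metadata["jurisdiction"] = "UK"  # set from your document source in practice

vectorstore_uk = FAISS.from_documents(chunks, embeddings)
retriever_uk = vectorstore_uk.as_retriever(
    search_kwargs={"k": 3, "filter": {"jurisdiction": "UK"}}
)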
A good loan approval agent does three things well: applies hard rules consistently, explains decisions with cited policy evidence, and escalates edge cases cleanly to humans. In wealth management that balance matters more than raw automation because compliance failures are expensive and trust is harder to rebuild than code.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.