How to Build a Loan Approval Agent Using LangChain in Python for Pension Funds
A loan approval agent for pension funds takes an application, checks it against policy, risk, and compliance rules, then produces a decision recommendation with an audit trail. For pension funds, this matters because lending decisions affect member capital, regulatory exposure, and fiduciary duty, so the agent has to be deterministic where it counts and explainable everywhere else.
Architecture
- Application intake layer: normalizes borrower data, loan terms, collateral, and fund-specific constraints into a structured schema.
- Policy retrieval layer: pulls pension fund lending policy, credit thresholds, concentration limits, and jurisdiction rules from a trusted knowledge base using `RetrievalQA` or a retrieval chain.
- Decision engine: uses an LLM through `ChatOpenAI` to summarize findings and recommend approve/review/decline, while keeping hard rules outside the model.
- Tool layer: calls deterministic Python functions for debt-to-income checks, exposure caps, KYC status, and sanctions screening.
- Audit and logging layer: stores inputs, retrieved policy snippets, tool outputs, and final reasoning for compliance review.
- Human review fallback: routes borderline or high-risk cases to an underwriter when confidence is low or policy conflicts exist.
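The layering above can be sketched as a thin orchestration function. This is a minimal sketch, not LangChain code: `route_application`, `intake`, `hard_checks`, and `recommend` are illustrative placeholder names, and the point is only the ordering — deterministic layers run before the LLM, and anything ambiguous falls back to human review.

```python
from typing import Callable

def route_application(raw: dict,
                      intake: Callable[[dict], dict],
                      hard_checks: Callable[[dict], list[str]],
                      recommend: Callable[[dict], str]) -> dict:
    """Orchestration skeleton: deterministic layers gate the LLM."""
    app = intake(raw)                # application intake layer
    issues = hard_checks(app)       # tool layer: fixed policy rules
    if issues:
        # Hard block: the LLM never sees auto-declined applications.
        return {"decision": "decline", "audit": issues}
    decision = recommend(app)       # decision engine (LLM)
    if decision not in {"approve", "review", "decline"}:
        decision = "review"         # human review fallback on odd output
    return {"decision": decision, "audit": issues}

# Minimal stand-ins to show the routing:
result = route_application(
    {"income": 80000},
    intake=lambda r: r,
    hard_checks=lambda a: [],
    recommend=lambda a: "approve",
)
print(result["decision"])  # approve
```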
Implementation
1. Define the application schema and hard checks

Keep the core underwriting logic in Python. The model should not invent ratios or override policy thresholds.
```python
from pydantic import BaseModel, Field
from typing import Literal

class LoanApplication(BaseModel):
    applicant_id: str
    annual_income: float = Field(gt=0)
    existing_debt: float = Field(ge=0)
    requested_amount: float = Field(gt=0)
    loan_term_months: int = Field(gt=0)
    jurisdiction: str
    kyc_passed: bool
    sanctions_cleared: bool

def debt_to_income(app: LoanApplication) -> float:
    # Simplified ratio: existing debt plus the monthly payment on the
    # requested amount, relative to annual income.
    return (app.existing_debt + app.requested_amount / app.loan_term_months) / app.annual_income

def hard_policy_checks(app: LoanApplication) -> list[str]:
    issues = []
    if not app.kyc_passed:
        issues.append("KYC failed")
    if not app.sanctions_cleared:
        issues.append("Sanctions screening failed")
    if debt_to_income(app) > 0.35:
        issues.append("DTI exceeds 35% threshold")
    return issues
```
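As a quick sanity check on the formula (the values below are illustrative, not from the article):

```python
# DTI per the article's formula:
# (existing_debt + requested_amount / loan_term_months) / annual_income
existing_debt = 10_000.0
requested_amount = 120_000.0
loan_term_months = 240
annual_income = 90_000.0

dti = (existing_debt + requested_amount / loan_term_months) / annual_income
print(round(dti, 4))  # 0.1167 — under the 0.35 threshold, so no DTI issue
```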
2. Load pension fund policy into retrieval

Use `PyPDFLoader`, `RecursiveCharacterTextSplitter`, `FAISS`, and `RetrievalQA` to ground the agent in the fund's actual lending policy. This is where you encode concentration limits, approved jurisdictions, and the exceptions process.
```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import RetrievalQA

loader = PyPDFLoader("pension_fund_lending_policy.pdf")
docs = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=120)
chunks = splitter.split_documents(docs)

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    chain_type="stuff",
)
```
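The `chunk_overlap=120` matters here: consecutive chunks share 120 characters, so a policy clause that straddles a chunk boundary remains retrievable from at least one chunk. To make the mechanic concrete, here is a simplified character-window version of overlapping splitting — a sketch only, not the actual `RecursiveCharacterTextSplitter` algorithm, which also splits on separators like paragraphs and sentences:

```python
def split_with_overlap(text: str, chunk_size: int = 800, overlap: int = 120) -> list[str]:
    """Fixed-size character windows that step by chunk_size - overlap,
    so adjacent chunks share `overlap` characters."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

policy_text = "x" * 2000
chunks = split_with_overlap(policy_text)
print(len(chunks), len(chunks[0]))  # 3 800
```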
3. Wrap the model with tools and produce a decision

Use `@tool` for deterministic checks and `AgentExecutor` if you want the model to decide when to call tools. For a loan approval flow, keep tool use narrow and explicit.
```python
from langchain_core.tools import tool
from langchain.agents import initialize_agent, AgentType

@tool
def run_hard_checks(application_json: str) -> str:
    """Run fixed underwriting checks on a loan application JSON string."""
    data = LoanApplication.model_validate_json(application_json)
    issues = hard_policy_checks(data)
    return "PASS" if not issues else "; ".join(issues)

@tool
def lookup_policy(question: str) -> str:
    """Retrieve pension fund policy guidance."""
    return qa.invoke({"query": question})["result"]

tools = [run_hard_checks, lookup_policy]

# Optional: an agent that decides when to call tools. For a regulated
# flow, the explicit pipeline in assess_application below is preferable.
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=False,
)

def assess_application(app: LoanApplication) -> dict:
    app_json = app.model_dump_json()

    # Deterministic checks run first; hard failures never reach the LLM.
    hard_result = run_hard_checks.invoke(app_json)
    if hard_result != "PASS":
        return {
            "decision": "decline",
            "reason": hard_result,
            "audit": {"hard_checks": hard_result},
        }

    policy_context = lookup_policy.invoke(
        f"Does this application fit policy for jurisdiction {app.jurisdiction}?"
    )

    prompt = f"""
You are assessing a loan for a pension fund.
Application: {app_json}
Policy context: {policy_context}
Return only one of: approve, review, decline.
Also include a short reason.
"""
    response = llm.invoke(prompt)
    return {
        "decision": response.content.strip(),
        "audit": {
            "hard_checks": hard_result,
            "policy_context": policy_context,
            "model_output": response.content,
        },
    }
```
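One weak point in the flow above: `response.content.strip()` is stored directly as the decision, but models often wrap the verdict in extra words ("Decision: APPROVE. Reason: ..."). A defensive parser is cheap insurance — `parse_decision` is a hypothetical helper, not part of the article's code, and it defaults ambiguous output to human review:

```python
import re

def parse_decision(text: str) -> str:
    """Pull the first approve/review/decline token from free-text model
    output; anything ambiguous is routed to manual review."""
    m = re.search(r"\b(approve|review|decline)\b", text.lower())
    return m.group(1) if m else "review"

print(parse_decision("Decision: APPROVE. Reason: DTI well under threshold."))  # approve
print(parse_decision("I am not sure about this one."))  # review
```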
4. Add structured output for downstream systems

Pension operations teams need machine-readable decisions. Use `with_structured_output()` so your case management system gets stable fields instead of free text.
```python
from pydantic import BaseModel

class Decision(BaseModel):
    decision: Literal["approve", "review", "decline"]
    reason: str

structured_llm = llm.with_structured_output(Decision)
result = structured_llm.invoke(
    f"Assess this loan application for a pension fund:\n{app.model_dump_json()}"
)
print(result.decision, result.reason)
```
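The same schema also guards downstream ingestion: because `decision` is a `Literal`, pydantic rejects any label outside the three allowed values, so a malformed decision fails loudly instead of entering the case system. A runnable check (the schema re-stated so the snippet is self-contained):

```python
from typing import Literal
from pydantic import BaseModel, ValidationError

class Decision(BaseModel):
    decision: Literal["approve", "review", "decline"]
    reason: str

ok = Decision(decision="review", reason="Jurisdiction exposure near cap")
print(ok.decision)  # review

try:
    Decision(decision="maybe", reason="?")
except ValidationError:
    print("rejected")  # invalid labels never reach the case system
```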
Production Considerations
- Data residency: keep embeddings, vector stores, logs, and model endpoints in-region if your pension fund operates under local residency requirements.
- Auditability: persist every decision with input payloads, retrieved policy chunks, tool outputs, model version, prompt version, and timestamp.
- Guardrails: enforce hard blocks outside the LLM for KYC failure, sanctions hits, exposure caps, and prohibited jurisdictions.
- Monitoring: track approval rates by segment, override rates by underwriters, retrieval quality drift, and cases routed to manual review.
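The auditability point translates directly into an append-only decision log. As a minimal sketch using only the standard library — `audit_record` is a hypothetical helper, and the exact fields should follow your fund's retention policy — each decision becomes one JSON line with the input payload, its hash, the decision, and the model and prompt versions:

```python
import datetime
import hashlib
import json

def audit_record(app_json: str, decision: dict, model_version: str, prompt_version: str) -> str:
    """Serialize one decision as a JSON line for an append-only audit log."""
    rec = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(app_json.encode()).hexdigest(),
        "input": app_json,
        "decision": decision,
        "model_version": model_version,
        "prompt_version": prompt_version,
    }
    return json.dumps(rec)

line = audit_record('{"applicant_id": "A-1"}', {"decision": "review"}, "gpt-4o-mini", "v3")
print(json.loads(line)["decision"]["decision"])  # review
```

Writing these lines to an append-only store (and never mutating them) is what lets you reconstruct a decision months later.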
Common Pitfalls
- Letting the LLM make final credit decisions without fixed rules. Avoid this by running deterministic checks first and using the model only for explanation or borderline assessment.
- Retrieving generic policy instead of fund-specific policy. Pension funds often have stricter concentration limits and governance rules than retail lenders; index only approved internal documents.
- Skipping audit context. If you cannot reconstruct why a decision was made six months later, the system is not production-ready for regulated lending.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit