How to Build a Loan Approval Agent Using LangChain in Python for Healthcare

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, langchain, python, healthcare

A loan approval agent for healthcare helps a lender decide whether to approve financing for medical equipment, clinic expansion, or working capital by combining application data, policy rules, and risk signals. It matters because healthcare lending has tighter compliance, stronger audit requirements, and more sensitive data handling than generic SMB lending.

Architecture

  • Application intake layer
    • Collects borrower details, requested amount, purpose, revenue history, and healthcare-specific fields like payer mix or provider type.
  • Document ingestion layer
    • Pulls in bank statements, tax returns, licenses, insurance certificates, and CMS/credentialing documents.
  • Policy and risk reasoning layer
    • Applies underwriting rules such as debt-to-income thresholds, minimum cash reserves, license validity, and excluded use cases.
  • LangChain decision agent
    • Uses ChatPromptTemplate, RunnableLambda, and an LLM to classify the case as approve, reject, or escalate.
  • Audit logging layer
    • Stores inputs, outputs, rule hits, and model rationale for compliance review.
  • Human review handoff
    • Routes borderline cases to an underwriter instead of auto-deciding.

Implementation

1) Define the underwriting schema

Use Pydantic to enforce structured input. This keeps the agent from reasoning over free-form garbage.

from pydantic import BaseModel, Field
from typing import Literal

class LoanApplication(BaseModel):
    borrower_name: str
    business_type: Literal["hospital", "clinic", "dental", "pharmacy", "other"]
    requested_amount: float = Field(gt=0)
    annual_revenue: float = Field(gt=0)
    monthly_debt_payments: float = Field(ge=0)
    cash_reserves_months: float = Field(ge=0)
    license_valid: bool
    has_active_sanctions_hit: bool
    purpose: str
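
The schema also gives you a clean rejection point for malformed intake data. A minimal sketch, assuming raw form data arrives as a dict (parse_application is an illustrative helper, not part of the pipeline above):

from pydantic import ValidationError

def parse_application(raw: dict) -> LoanApplication:
    # Reject malformed payloads before any underwriting logic runs.
    try:
        return LoanApplication(**raw)
    except ValidationError as exc:
        # In production, return a structured error to the intake layer instead.
        raise ValueError(f"Invalid application payload: {exc}") from exc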

2) Build deterministic pre-checks before the LLM

Do not ask the model to rediscover basic policy. Put hard rules in code first.

def precheck(app: LoanApplication) -> dict:
    # Annualized debt payments over revenue serves as a simple DTI proxy here.
    dti = (app.monthly_debt_payments * 12) / app.annual_revenue
    flags = []

    if not app.license_valid:
        flags.append("invalid_license")
    if app.has_active_sanctions_hit:
        flags.append("sanctions_hit")
    if app.cash_reserves_months < 3:
        flags.append("low_cash_reserves")
    if dti > 0.35:
        flags.append("high_dti")

    return {
        "dti": round(dti, 4),
        "flags": flags,
        "hard_fail": any(f in flags for f in ["invalid_license", "sanctions_hit"])
    }
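
Hard failures should short-circuit before any model call; there is no reason to spend tokens or latency on an application that policy has already rejected. A small gating helper (should_call_llm is an illustrative name):

def should_call_llm(app: LoanApplication) -> bool:
    # Skip the model entirely when a deterministic rule already rejects.
    return not precheck(app)["hard_fail"]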

3) Use LangChain to generate a structured decision

This pattern uses ChatPromptTemplate, RunnableLambda, and JsonOutputParser with a real chat model. The model should explain the decision based on policy inputs only.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_core.output_parsers import JsonOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a healthcare loan underwriting assistant. "
     "Decide APPROVE, REJECT, or ESCALATE using only the provided facts. "
     "Do not mention protected health information. "
     "Return valid JSON with keys: decision, reason, risk_level."),
    ("human",
     "Application: {application}\nPrecheck results: {precheck}\n"
     "Healthcare policy notes: {policy_notes}")
])

parser = JsonOutputParser()

def build_inputs(app: LoanApplication):
    return {
        "application": app.model_dump(),
        "precheck": precheck(app),
        "policy_notes": [
            "Healthcare borrowers require license verification.",
            "Any sanctions hit is a hard reject.",
            "Borderline leverage should be escalated to human review."
        ]
    }

chain = (
    RunnableLambda(build_inputs)
    | prompt
    | llm
    | parser
)

app = LoanApplication(
    borrower_name="Northside Imaging LLC",
    business_type="clinic",
    requested_amount=250000,
    annual_revenue=1200000,
    monthly_debt_payments=18000,
    cash_reserves_months=4,
    license_valid=True,
    has_active_sanctions_hit=False,
    purpose="MRI equipment purchase"
)

result = chain.invoke(app)
print(result)
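
JsonOutputParser returns a plain dict, so it is worth validating the shape before routing. A minimal guardrail, assuming the three keys requested in the system prompt (the Decision model below is an assumption, not LangChain API):

from pydantic import BaseModel
from typing import Literal

class Decision(BaseModel):
    decision: Literal["APPROVE", "REJECT", "ESCALATE"]
    reason: str
    risk_level: str

# Raises ValidationError if the model drifted from the requested schema.
validated = Decision(**result)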

4) Wrap the decision with routing logic

Use the model output as one input into a final policy gate. In production, this is where you enforce approval thresholds and human escalation.

def final_decision(model_result: dict, app: LoanApplication) -> str:
    checks = precheck(app)

    if checks["hard_fail"]:
        return "REJECT"

    if model_result["decision"] == "APPROVE" and checks["dti"] <= 0.35:
        return "APPROVE"

    return "ESCALATE"

Production Considerations

  • Keep PHI out of prompts
    • Strip patient-level data before the LangChain pipeline. The agent only needs business financials and compliance metadata.
  • Enforce data residency
    • If you operate in regulated regions, keep inference and logs inside approved cloud regions. Don’t send underwriting artifacts across borders without legal review.
  • Log every decision path (see the sketch after this list)
    • Store application_id, rule hits, model output, versioned prompts, and final outcome. Auditors will ask why a loan was rejected or escalated.
  • Add guardrails around hallucination
    • Use structured outputs like JSON parsing plus hard-coded policy checks. Never let the LLM be the sole approval authority.
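
As a starting point for the audit trail mentioned above, one way to shape a log record (the field names and PROMPT_VERSION constant are assumptions; persist it wherever your compliance store lives):

import json
import uuid
from datetime import datetime, timezone

PROMPT_VERSION = "underwriting-prompt-v1"  # bump whenever the prompt text changes

def audit_record(app: LoanApplication, checks: dict,
                 model_result: dict, outcome: str) -> str:
    # Serialize the full decision path for compliance review.
    return json.dumps({
        "application_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_version": PROMPT_VERSION,
        "borrower": app.borrower_name,
        "rule_hits": checks["flags"],
        "model_output": model_result,
        "final_outcome": outcome,
    })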

Common Pitfalls

  • Using the LLM as the primary underwriter
    • Bad move. The model should explain or triage decisions; deterministic rules should own hard approvals and rejections.
  • Passing raw clinical data into prompts
    • That creates unnecessary compliance exposure. Remove PHI/PII that is irrelevant to credit assessment.
  • Skipping versioning for prompts and policies
    • If your approval logic changes without version control, you cannot reproduce past decisions during audits or disputes.

A solid healthcare loan approval agent is mostly disciplined plumbing: strict schemas, deterministic policy gates, structured LLM output, and auditability. If you get those right with LangChain’s Runnable patterns and clean separation of concerns, you can ship something that survives both production traffic and compliance review.

