How to Build a Loan Approval Agent for Healthcare Using LangChain in TypeScript
A loan approval agent for healthcare evaluates financing requests for patients, clinics, or providers and returns a decision with reasons, required documents, and escalation paths. It matters because healthcare lending is not just about credit risk; it also touches protected data, regulated workflows, and auditability requirements that can’t be handled by a generic chatbot.
Architecture
- **Input normalization layer**
  - Takes raw application data from forms, CRM, or case management systems.
  - Validates required fields such as applicant type, requested amount, repayment horizon, and consent flags.
- **Policy and eligibility engine**
  - Encodes business rules such as minimum credit score, debt-to-income thresholds, provider accreditation checks, and treatment category restrictions.
  - Separates deterministic rules from LLM reasoning.
- **LangChain decision agent**
  - Uses `ChatOpenAI` with tool calling to summarize the application and decide whether to approve, reject, or escalate.
  - Produces structured output so downstream systems don't parse free text.
- **Document retrieval layer**
  - Pulls supporting policy docs, underwriting rules, and healthcare compliance references from a vector store.
  - Keeps the model grounded in current internal policy.
- **Audit and trace store**
  - Persists every prompt, tool call, retrieved document ID, and final decision.
  - Required for model governance and post-decision review in regulated environments.
- **Human review queue**
  - Routes borderline cases to an underwriter or compliance officer.
  - Used when confidence is low or the case involves sensitive healthcare scenarios.
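The layers above can be sketched as a typed pipeline. This is a minimal, hypothetical skeleton — the names `NormalizedApplication` and `applyPolicyRules`, and the specific cutoffs, are illustrative, not a LangChain API:

```ts
// Hypothetical stage types for the pipeline described above.
interface NormalizedApplication {
  applicantType: "patient" | "clinic" | "provider";
  requestedAmount: number;
  repaymentMonths: number;
  consentGiven: boolean;
}

interface Decision {
  decision: "approve" | "reject" | "escalate";
  reason: string;
}

// Deterministic rules run before any LLM call; null means "no hard rule
// fired", so the case falls through to the LangChain decision agent.
function applyPolicyRules(app: NormalizedApplication): Decision | null {
  if (!app.consentGiven) {
    return { decision: "reject", reason: "Missing consent" };
  }
  if (app.requestedAmount > 50_000) {
    return { decision: "escalate", reason: "High-value request" };
  }
  return null;
}
```

The key design point is the `Decision | null` return: deterministic layers can short-circuit the pipeline, and only the remaining cases pay for an LLM call.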
Implementation
1) Install the LangChain packages you actually need
For TypeScript projects, keep the dependency surface small. You need an LLM provider package plus core LangChain primitives.
```bash
npm install langchain @langchain/openai zod
```
Set your environment variables:
```bash
export OPENAI_API_KEY="your-key"
```
2) Define a strict decision schema
Do not let the model return plain text. Use `zod` with `StructuredOutputParser` so your API gets predictable fields every time.
```ts
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";

const DecisionSchema = z.object({
  decision: z.enum(["approve", "reject", "escalate"]),
  riskScore: z.number().min(0).max(100),
  reason: z.string(),
  requiredDocuments: z.array(z.string()),
  auditNotes: z.string(),
});

// Exported so the chain module can import it as `{ parser }`.
export const parser = StructuredOutputParser.fromZodSchema(DecisionSchema);
export type LoanDecision = z.infer<typeof DecisionSchema>;
```
3) Build the LangChain decision chain
This example uses `ChatOpenAI`, `PromptTemplate`, and `RunnableSequence`. The prompt includes healthcare-specific constraints like consent handling and escalation for sensitive cases.
```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { parser } from "./schema";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0, // deterministic output for underwriting decisions
});

const prompt = PromptTemplate.fromTemplate(`
You are a loan underwriting assistant for healthcare financing.

Rules:
- Never approve if consent is missing.
- Escalate if the request involves PHI beyond what is needed for underwriting.
- Use only the provided application data and policy context.
- Return structured output matching these instructions:
{format_instructions}

Application:
{application}

Policy context:
{policy}
`);

export const loanApprovalChain = RunnableSequence.from([
  {
    application: (input: { application: string; policy: string }) => input.application,
    policy: (input: { application: string; policy: string }) => input.policy,
    format_instructions: () => parser.getFormatInstructions(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]).pipe(parser);
```
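The `policy` input above would normally come from the document retrieval layer. As a simplified stand-in for a vector store, here is a keyword-overlap sketch that shows the shape of the grounding step; in production you would swap this for a real retriever (embeddings plus a LangChain vector store), and the snippet list here is purely illustrative:

```ts
// Simplified stand-in for the retrieval layer: rank policy snippets by
// keyword overlap with the serialized application and return the top k.
const policySnippets = [
  "Approve if DTI < 0.40 and amount <= $50k.",
  "Escalate if provider accreditation is missing.",
  "Reject if consent is false or treatment category is excluded.",
];

function retrievePolicy(applicationJson: string, k = 2): string[] {
  const words = new Set(applicationJson.toLowerCase().match(/[a-z]+/g) ?? []);
  return policySnippets
    .map((text) => ({
      text,
      // Score = number of snippet words that also appear in the application.
      score: (text.toLowerCase().match(/[a-z]+/g) ?? []).filter((w) => words.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.text);
}
```

The result of `retrievePolicy(JSON.stringify(app)).join("\n")` is what you would feed into the chain's `policy` variable, keeping the model grounded in current internal policy rather than its training data.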
4) Run the agent with deterministic guardrails
Keep deterministic checks outside the LLM. If consent is missing or residency rules fail, reject before calling the model. That reduces cost and prevents unnecessary exposure of regulated data.
```ts
type ApplicationInput = {
  applicantId: string;
  consentGiven: boolean;
  countryOfResidence: string;
  requestedAmount: number;
};

// Deterministic pre-checks: hard stops that should never reach the model.
function preCheck(input: ApplicationInput) {
  if (!input.consentGiven) {
    return { decision: "reject", reason: "Missing patient consent" } as const;
  }
  if (input.countryOfResidence !== "US") {
    return { decision: "escalate", reason: "Data residency review required" } as const;
  }
  return null;
}

async function evaluateApplication() {
  const app = {
    applicantId: "app_123",
    consentGiven: true,
    countryOfResidence: "US",
    requestedAmount: 25000,
    incomeBand: "stable",
    debtToIncomeRatio: 0.31,
    providerType: "clinic",
    treatmentCategory: "orthopedics",
  };

  const policy = `
Approve if DTI < 0.40 and amount <= $50k.
Escalate if provider accreditation is missing.
Reject if consent is false or treatment category is excluded.
`;

  const precheckResult = preCheck(app);
  if (precheckResult) {
    console.log(precheckResult);
    return;
  }

  const result = await loanApprovalChain.invoke({
    application: JSON.stringify(app),
    policy,
  });
  console.log(result);
}

evaluateApplication().catch(console.error);
```
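Every invocation should also leave a trace for the audit and trace store from the architecture. A minimal sketch — the field names and version strings are illustrative, and hashing the payload with `sha256` keeps raw PHI out of the log while still letting you prove which exact input produced a decision:

```ts
import { createHash } from "node:crypto";

// Illustrative audit record: enough to explain "why" months later
// without storing raw applicant data in the log itself.
interface AuditRecord {
  applicantId: string;
  inputHash: string; // sha256 of the exact payload sent to the model
  promptVersion: string;
  modelVersion: string;
  decision: string;
  timestamp: string;
}

function buildAuditRecord(
  applicantId: string,
  payload: string,
  decision: string,
): AuditRecord {
  return {
    applicantId,
    inputHash: createHash("sha256").update(payload).digest("hex"),
    promptVersion: "loan-approval-v1", // pin and bump explicitly on prompt edits
    modelVersion: "gpt-4o-mini",
    decision,
    timestamp: new Date().toISOString(),
  };
}
```

In `evaluateApplication`, you would call `buildAuditRecord(app.applicantId, JSON.stringify(app), result.decision)` after each invocation and persist it to your trace store.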
Production Considerations
- **Separate PHI from underwriting context**
  - Only send the minimum necessary data into LangChain prompts.
  - Mask identifiers before inference unless they are strictly required for scoring.
- **Log everything for audit**
  - Store input hashes, prompt versions, retrieved policy document IDs, model version, output schema version, and final disposition.
  - In healthcare lending, you need to explain why a case was approved or escalated months later.
- **Enforce data residency**
  - Pin inference to approved regions and make sure vector stores do not replicate regulated data across jurisdictions.
  - If your deployment spans multiple countries, route cases by residency before any retrieval call.
- **Add human-in-the-loop thresholds**
| Condition | Action |
|---|---|
| Missing consent | Reject immediately |
| Low confidence score | Escalate to underwriter |
| Policy conflict | Escalate to compliance |
| High-value request | Require manual review |
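The thresholds in the table above can be encoded directly as an ordered routing function. A minimal sketch — the 0.7 confidence cutoff and $50k amount cutoff are illustrative placeholders to tune against your own policy:

```ts
type ReviewAction =
  | "reject"
  | "escalate_underwriter"
  | "escalate_compliance"
  | "manual_review"
  | "auto";

interface ReviewSignals {
  consentGiven: boolean;
  confidence: number;      // calibrated model confidence, 0-1
  policyConflict: boolean; // deterministic rules disagree with the model
  requestedAmount: number;
}

// Mirrors the routing table: checks run from hardest stop to softest.
function routeForReview(s: ReviewSignals): ReviewAction {
  if (!s.consentGiven) return "reject";                   // missing consent
  if (s.policyConflict) return "escalate_compliance";     // policy conflict
  if (s.confidence < 0.7) return "escalate_underwriter";  // low confidence
  if (s.requestedAmount > 50_000) return "manual_review"; // high-value request
  return "auto";
}
```

Ordering matters: a case with both missing consent and a policy conflict must hit the hard reject first, which is why this is a cascade rather than independent checks.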
Common Pitfalls
- **Letting the model make policy decisions without hard rules**
  - Fix this by running deterministic checks before the LLM.
  - The model should explain decisions, not invent eligibility criteria.
- **Sending too much healthcare data into prompts**
  - Fix this by redacting PHI and using minimum necessary fields only.
  - If you need clinical context, retrieve only approved summaries instead of raw notes.
- **Skipping structured outputs**
  - Fix this by using `StructuredOutputParser` or another schema-bound pattern.
  - Free-text approvals break downstream automation and make audits painful.
- **Ignoring jurisdiction-specific storage rules**
  - Fix this by tagging each request with region metadata and routing it to compliant infrastructure.
  - Healthcare finance often fails on residency before it fails on model quality.
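One concrete pattern for the PHI pitfall: build the prompt payload from an explicit allowlist, so new upstream fields can never leak into prompts by default. The field names below are illustrative:

```ts
// Allowlist-based redaction: only fields needed for underwriting reach
// the prompt. Anything not listed (names, addresses, clinical notes)
// is dropped by default rather than by exception.
const UNDERWRITING_FIELDS = [
  "requestedAmount",
  "debtToIncomeRatio",
  "providerType",
  "treatmentCategory",
] as const;

function redactForPrompt(raw: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const key of UNDERWRITING_FIELDS) {
    if (key in raw) out[key] = raw[key];
  }
  return out;
}
```

You would call `JSON.stringify(redactForPrompt(app))` instead of `JSON.stringify(app)` when building the chain input, which makes "minimum necessary" a property of the code rather than of reviewer discipline.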
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.