How to Build a Loan Approval Agent Using LangChain in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, langchain, typescript, insurance

A loan approval agent for insurance reviews an application, checks policy and underwriting rules, pulls the right customer and claim context, and returns a decision with a reason trail. It matters because insurance teams need consistent decisions, faster turnaround, and an audit record that can stand up to compliance review.

Architecture

  • Input layer

    • Accepts loan request data from your CRM, policy admin system, or underwriting portal.
    • Normalizes fields like applicant identity, requested amount, policy status, loss history, and repayment source.
  • Retrieval layer

    • Pulls internal rules, underwriting guidelines, product constraints, and jurisdiction-specific compliance notes.
    • Uses a VectorStoreRetriever or a simple document retriever for controlled context injection.
  • Decision engine

    • Uses a LangChain RunnableSequence or ChatPromptTemplate + model call to classify the request.
    • Produces structured output: approve, reject, or needs_manual_review.
  • Policy guardrail layer

    • Enforces hard rules outside the model: minimum policy age, active coverage, no open fraud flags, residency constraints.
    • This is where you keep deterministic checks that should never be left to the LLM.
  • Audit and logging layer

    • Stores input payload hash, retrieved documents, model response, and final decision.
    • Required for insurance audits, complaint handling, and regulatory traceability.
  • Human review fallback

    • Routes ambiguous or high-risk cases to an underwriter.
    • Essential for exceptions, adverse action reasons, and edge cases.
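The input layer's normalization step can be sketched as a small TypeScript function. The raw field names (`applicant_id`, `requested_amount`, `policy_status`) are illustrative assumptions; map them to whatever your CRM or policy admin system actually emits.

```typescript
// Input layer sketch: normalize raw CRM fields into a canonical shape.
// Raw field names are assumptions; adapt them to your source systems.
type RawRequest = Record<string, unknown>;

type NormalizedRequest = {
  applicantId: string;
  requestedAmount: number;
  policyActive: boolean;
};

function normalizeRequest(raw: RawRequest): NormalizedRequest {
  return {
    applicantId: String(raw["applicant_id"] ?? ""),
    requestedAmount: Number(raw["requested_amount"] ?? 0),
    policyActive: raw["policy_status"] === "active",
  };
}
```

Normalizing once at the boundary means every layer downstream, including the guardrails, works against one stable shape.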

Implementation

1) Install the LangChain packages

For TypeScript, keep the stack small and explicit. Use the LangChain core packages plus one chat model provider.

npm install langchain @langchain/core @langchain/openai zod

Set your environment variables before running anything:

export OPENAI_API_KEY="your-key"

2) Define the decision schema and prompt

Insurance workflows need structured output. A free-form answer is not enough when downstream systems expect a machine-readable decision plus reason codes.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const LoanDecisionSchema = z.object({
  decision: z.enum(["approve", "reject", "needs_manual_review"]),
  reasonCode: z.string(),
  explanation: z.string(),
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a loan approval agent for an insurance company.
Follow these rules:
- Approve only if all hard rules pass.
- Reject if there is an explicit policy violation.
- Otherwise return needs_manual_review.
- Keep explanations concise and factual.
- Never invent missing facts.`,
  ],
  [
    "human",
    `Applicant:
{applicantJson}

Policy context:
{policyContext}

Underwriting notes:
{underwritingNotes}

Return a JSON object matching this schema:
{schema}`,
  ],
]);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

3) Add deterministic guardrails before the LLM

Do not ask the model to enforce hard compliance rules. That belongs in code.

type Applicant = {
  id: string;
  policyActive: boolean;
  policyAgeMonths: number;
  fraudFlag: boolean;
  residencyCountry: string;
};

function hardRules(applicant: Applicant) {
  if (!applicant.policyActive) {
    return { allowed: false, reasonCode: "POLICY_INACTIVE" };
  }

  if (applicant.policyAgeMonths < 6) {
    return { allowed: false, reasonCode: "POLICY_TOO_NEW" };
  }

  if (applicant.fraudFlag) {
    return { allowed: false, reasonCode: "FRAUD_FLAG_PRESENT" };
  }

  if (!["US", "CA", "GB"].includes(applicant.residencyCountry)) {
    return { allowed: false, reasonCode: "JURISDICTION_NOT_SUPPORTED" };
  }

  return { allowed: true as const };
}

4) Build the LangChain runnable pipeline

This is the actual pattern you want in production: validate first, then call the model with bounded context. Parse the result into a typed object so your API never depends on raw text.

import { RunnableLambda } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chain = prompt.pipe(model).pipe(new StringOutputParser());

export async function evaluateLoanRequest(input: {
  applicant: Applicant;
  policyContext: string;
  underwritingNotes: string;
}) {
  const guardrail = hardRules(input.applicant);

  if (!guardrail.allowed) {
    return {
      decision: "reject",
      reasonCode: guardrail.reasonCode,
      explanation: `Blocked by hard rule: ${guardrail.reasonCode}`,
    };
  }

  const result = await chain.invoke({
    applicantJson: JSON.stringify(input.applicant),
    policyContext: input.policyContext,
    underwritingNotes: input.underwritingNotes,
    // Zod schemas do not serialize usefully with JSON.stringify, so describe
    // the expected shape explicitly.
    schema: `{ "decision": "approve" | "reject" | "needs_manual_review", "reasonCode": string, "explanation": string }`,
  });

  // Models sometimes wrap JSON in markdown fences; strip them before parsing.
  const json = result.replace(/^```(?:json)?\s*|\s*```$/g, "");
  const parsed = LoanDecisionSchema.parse(JSON.parse(json));
  return parsed;
}
}

If you want a stricter contract at the boundary, wrap it in a runnable that returns only validated objects:

const validatedDecision = RunnableLambda.from(
  async (input: {
    applicant: Applicant;
    policyContext: string;
    underwritingNotes: string;
  }) => {
    const raw = await evaluateLoanRequest(input);
    return LoanDecisionSchema.parse(raw);
  },
);

Production Considerations

  • Audit every decision path

Store:

  • input payload hash
  • retrieved policy documents
  • guardrail outcome
  • model version
  • final decision

This is non-negotiable for insurance compliance and dispute resolution.
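The hashing step can be sketched with Node's built-in crypto module. The record shape and helper name below are illustrative assumptions, not a fixed standard:

```typescript
import { createHash } from "node:crypto";

// Hash the raw input payload so the audit log can prove what was evaluated
// without storing every sensitive field verbatim. Record shape is illustrative.
type AuditRecord = {
  inputHash: string;
  retrievedDocIds: string[];
  guardrailOutcome: string;
  modelVersion: string;
  finalDecision: string;
  timestamp: string;
};

function buildAuditRecord(
  payload: unknown,
  fields: Omit<AuditRecord, "inputHash" | "timestamp">,
): AuditRecord {
  const inputHash = createHash("sha256")
    .update(JSON.stringify(payload))
    .digest("hex");
  return { inputHash, timestamp: new Date().toISOString(), ...fields };
}
```

Because the hash is deterministic for a given payload, a disputed decision can be matched back to the exact input that produced it.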

  • Keep regulated data in-region

If your insurer has data residency requirements, pin model inference and vector storage to approved regions. Do not send PII or claims history to services outside your jurisdiction without legal review.

  • Separate hard rules from model judgment

Anything tied to eligibility, sanctions screening, fraud flags, or jurisdictional restrictions should be enforced in code. The LLM should explain decisions and handle gray areas, not override compliance logic.

  • Add human review thresholds

Route low-confidence or high-impact cases to an underwriter. Examples:

  • large loan amount
  • conflicting policy records
  • missing income proof
  • adverse claim history
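A routing check for these thresholds can live next to the hard rules. The field names and the amount cutoff below are assumptions for illustration, not regulatory figures:

```typescript
// Illustrative review-routing thresholds; tune per product and jurisdiction.
type ReviewInput = {
  requestedAmount: number;
  hasConflictingPolicyRecords: boolean;
  hasIncomeProof: boolean;
  openClaimsLast24Months: number;
};

const MAX_AUTO_DECISION_AMOUNT = 25_000; // assumed cutoff, not a real limit

function needsHumanReview(input: ReviewInput): boolean {
  return (
    input.requestedAmount > MAX_AUTO_DECISION_AMOUNT ||
    input.hasConflictingPolicyRecords ||
    !input.hasIncomeProof ||
    input.openClaimsLast24Months > 0
  );
}
```

Run this after the guardrails but before (or alongside) the model call, so a `needs_manual_review` route never depends on the LLM noticing the problem.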

Common Pitfalls

  1. Letting the LLM make compliance decisions

    If you ask the model whether a customer passes residency or fraud checks, you will eventually get inconsistent answers. Put those checks in deterministic TypeScript functions before any model call.

  2. Passing too much sensitive context

    Dumping full claim files into the prompt increases privacy risk and token cost. Retrieve only what is needed for the specific decision and redact PII where possible.

  3. Skipping structured output validation

    A plain text answer breaks downstream automation fast. Always parse with Zod or a similar schema validator before writing into case management systems.
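For the oversharing pitfall, a field allowlist is the simplest redaction strategy: only fields you explicitly name ever reach the prompt. The helper below is a minimal sketch with assumed field names:

```typescript
// Allowlist redactor sketch: only explicitly named fields reach the prompt.
// Field names in the usage comment are illustrative assumptions.
function redactForPrompt<T extends Record<string, unknown>>(
  record: T,
  allowedFields: (keyof T)[],
): Partial<T> {
  const out: Partial<T> = {};
  for (const key of allowedFields) {
    if (key in record) out[key] = record[key];
  }
  return out;
}
```

An allowlist fails closed: a new sensitive field added to the record stays out of the prompt until someone deliberately adds it, unlike a denylist, which leaks anything nobody thought to block.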

A loan approval agent in insurance works when it is narrow, auditable, and boring in the right places. Use LangChain for orchestration and reasoning; use TypeScript for enforcement; use your compliance team’s rules as code.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
