# How to Build an Underwriting Agent Using LangChain in TypeScript for Fintech
An underwriting agent automates the first pass of credit or risk assessment: it gathers applicant data, checks policy rules, scores the application, and produces a decision recommendation with an audit trail. For fintech, that matters because you need faster approvals without losing control over compliance, explainability, and consistency.
## Architecture
- **Input adapter**
  - Normalizes applicant payloads from your API into a structured internal schema.
  - Handles KYC/KYB fields, income, liabilities, transaction summaries, and consent flags.
- **Policy retrieval layer**
  - Pulls underwriting rules from a controlled source like a vector store or document store.
  - Keeps product terms, eligibility thresholds, and exclusions versioned.
- **LLM reasoning layer**
  - Uses LangChain to summarize evidence and generate a recommendation.
  - Must be constrained to structured output, not free-form chat.
- **Decision engine**
  - Applies deterministic checks for hard declines, manual review triggers, and score bands.
  - The model should assist, not replace, policy logic.
- **Audit and trace layer**
  - Logs inputs, retrieved policy snippets, model outputs, and final decision.
  - Required for internal review, regulator requests, and dispute handling.
- **Data boundary controls**
  - Enforces residency, PII redaction, and tenant isolation.
  - Prevents sensitive data from leaving approved regions or vendors.
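The audit and trace layer can be as simple as a plain record type that gets written once per decision. The sketch below is illustrative only: the names `AuditRecord` and `buildAuditRecord`, and the exact fields, are assumptions rather than a standard schema.

```typescript
// Illustrative shape for one underwriting audit entry.
// Field names are assumptions, not from any specific library or standard.
interface AuditRecord {
  applicationId: string;
  timestamp: string;               // ISO-8601, for ordering in immutable logs
  promptVersion: string;           // which prompt template was used
  policyVersion: string;           // which rulebook snapshot was retrieved
  modelId: string;                 // exact model identifier
  inputHash: string;               // hash of the normalized applicant payload
  retrievedPolicyIds: string[];    // policy snippets shown to the model
  modelOutput: unknown;            // raw structured output before post-checks
  finalDecision: "APPROVE" | "MANUAL_REVIEW" | "DECLINE";
}

// Stamps the timestamp at write time so callers cannot backdate entries.
function buildAuditRecord(
  applicationId: string,
  finalDecision: AuditRecord["finalDecision"],
  partial: Omit<AuditRecord, "applicationId" | "timestamp" | "finalDecision">
): AuditRecord {
  return {
    applicationId,
    timestamp: new Date().toISOString(),
    finalDecision,
    ...partial,
  };
}
```

Writing the record before returning the decision, rather than after, means a crash mid-response still leaves a trace.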
## Implementation
### 1. Define the underwriting schema and model client

Use Zod to keep the output structured. In fintech workflows, this is non-negotiable because downstream systems need stable fields for decisions and audit.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";

const UnderwritingDecisionSchema = z.object({
  riskBand: z.enum(["LOW", "MEDIUM", "HIGH"]),
  decision: z.enum(["APPROVE", "MANUAL_REVIEW", "DECLINE"]),
  rationale: z.string(),
  missingInfo: z.array(z.string()).default([]),
  policyReferences: z.array(z.string()).default([]),
});

type UnderwritingDecision = z.infer<typeof UnderwritingDecisionSchema>;

const parser = StructuredOutputParser.fromZodSchema(UnderwritingDecisionSchema);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
```
### 2. Load policy context and build the prompt

For production underwriting, the LLM should only reason over approved policy text. A retrieval step keeps your product rules current without hardcoding them into prompts.

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

const underwritingPrompt = new PromptTemplate({
  template: `
You are an underwriting assistant for a fintech lender.
Use only the policy context and applicant facts below.
If information is missing or contradictory, flag MANUAL_REVIEW.

Policy Context:
{policyContext}

Applicant Facts:
{applicantFacts}

Return your answer in this format:
{format_instructions}
`,
  inputVariables: ["policyContext", "applicantFacts"],
  partialVariables: {
    format_instructions: parser.getFormatInstructions(),
  },
});
```
### 3. Create deterministic pre-checks before the LLM call

This is where you enforce hard business rules. If an applicant violates a non-negotiable rule, do not ask the model to “decide around it.”

```typescript
interface Applicant {
  name: string;
  country: string;
  annualIncome: number;
  monthlyDebtPayments: number;
  consentGiven: boolean;
}

// Discriminated union so TypeScript can narrow on `decision`
// and only expose `reason` on the failing branches.
type PrecheckResult =
  | { decision: "DECLINE" | "MANUAL_REVIEW"; reason: string }
  | { decision: "PASS" };

function precheck(applicant: Applicant): PrecheckResult {
  if (!applicant.consentGiven) {
    return { decision: "DECLINE", reason: "Missing consent" };
  }
  const dti = applicant.monthlyDebtPayments / (applicant.annualIncome / 12);
  if (dti > 0.5) {
    return { decision: "MANUAL_REVIEW", reason: "Debt-to-income above threshold" };
  }
  return { decision: "PASS" };
}
```
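To make the DTI gate concrete: an applicant earning $60,000 a year has a monthly income of $5,000, so $2,600 in monthly debt payments gives a ratio of 0.52, just over the 0.5 threshold. A standalone helper (restating the same arithmetic as the precheck above) shows this:

```typescript
// Monthly debt-to-income ratio, matching the precheck calculation.
function computeDti(annualIncome: number, monthlyDebtPayments: number): number {
  return monthlyDebtPayments / (annualIncome / 12);
}

const dti = computeDti(60_000, 2_600); // 2600 / 5000 = 0.52, above the 0.5 gate
```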
### 4. Run the agent workflow with LangChain

This pattern keeps the agent small and auditable. You can swap the policy source later without changing the control flow.

```typescript
export async function underwriteApplicant(applicant: Applicant): Promise<UnderwritingDecision> {
  const gate = precheck(applicant);
  if (gate.decision !== "PASS") {
    return {
      riskBand: gate.decision === "DECLINE" ? "HIGH" : "MEDIUM",
      decision: gate.decision,
      rationale: gate.reason,
      missingInfo: [],
      policyReferences: ["precheck-rulebook-v1"],
    };
  }

  const policyContext = [
    "- Approve if income > $40k and DTI <= 35%",
    "- Manual review if income documentation is incomplete",
    "- Decline if sanctions screening is unresolved",
    "- Country must be in supported jurisdiction list",
  ].join("\n");

  const applicantFacts = JSON.stringify(
    {
      name: applicant.name,
      country: applicant.country,
      annualIncome: applicant.annualIncome,
      monthlyDebtPayments: applicant.monthlyDebtPayments,
      dtiRatio: Number((applicant.monthlyDebtPayments / (applicant.annualIncome / 12)).toFixed(2)),
    },
    null,
    2
  );

  const chain = underwritingPrompt.pipe(llm).pipe(parser);
  const result = await chain.invoke({ policyContext, applicantFacts });
  return result;
}
```
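Even schema-validated output can be internally inconsistent, so a deterministic post-check after the chain is a cheap safety net. The sketch below is an assumption, not part of LangChain: the `postCheck` name and its one rule (downgrade an APPROVE whenever the model itself reports missing information) are illustrative, and the interface mirrors the Zod schema's shape without the library dependency.

```typescript
// Mirrors UnderwritingDecisionSchema's shape, without the Zod dependency.
interface Decision {
  riskBand: "LOW" | "MEDIUM" | "HIGH";
  decision: "APPROVE" | "MANUAL_REVIEW" | "DECLINE";
  rationale: string;
  missingInfo: string[];
  policyReferences: string[];
}

// Deterministic guard: the model may explain, but policy code owns the decision.
// An APPROVE that lists missing information is contradictory, so downgrade it.
function postCheck(d: Decision): Decision {
  if (d.decision === "APPROVE" && d.missingInfo.length > 0) {
    return {
      ...d,
      decision: "MANUAL_REVIEW",
      rationale: `${d.rationale} [downgraded: approved despite missing info]`,
    };
  }
  return d;
}
```

Running the chain's result through a guard like this keeps contradictory approvals out of downstream systems.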
## Production Considerations
- **Deploy in-region**
  - Keep inference endpoints in the same jurisdiction as customer data.
  - If your bank operates in multiple regions, route requests by tenant and residency rules.
- **Log everything needed for audit**
  - Store prompt version, policy version, retrieved documents, final output, and human overrides.
  - Use immutable logs so compliance teams can reconstruct every decision path.
- **Add guardrails before generation**
  - Redact SSNs, account numbers, passport IDs, and full card data before sending text to the model.
- **Monitor drift and override rates**
| Metric | Why it matters |
|---|---|
| Manual review rate | Signals bad thresholds or weak policy coverage |
| Override rate | Shows whether analysts disagree with model recommendations |
| Decline reason distribution | Helps catch broken prompts or stale policies |
| Latency p95 | Impacts application completion rates |
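The redaction guardrail can start as simple pattern masking before any text reaches the model. The patterns below are a starting point, not a complete PII strategy: they cover US-style SSNs and 13-16 digit card numbers only, and production systems typically layer a dedicated PII-detection service on top.

```typescript
// Masks common PII patterns before text is sent to an LLM.
// Regexes are illustrative; real systems need broader detection.
function redactPII(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")    // US SSN format 123-45-6789
    .replace(/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"); // 13-16 digit card numbers
}
```

Applying `redactPII` to `applicantFacts` before the chain invocation keeps raw identifiers out of prompts and, by extension, out of vendor-side logs.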
## Common Pitfalls
- **Letting the model make final decisions.** Avoid this by keeping approval logic in deterministic code. The LLM should explain and classify edge cases; it should not own credit policy.
- **Sending raw PII into prompts.** Mask or tokenize sensitive fields before calling ChatOpenAI. For regulated fintech workloads, assume every prompt could be reviewed later by security or compliance.
- **Skipping versioning on policies and prompts.** If you cannot reproduce a prior decision exactly, you do not have an auditable underwriting system. Version prompts, retrieval sources, thresholds, and model IDs together.
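One lightweight way to version everything together is a single manifest pinned per deployment and stamped into every audit record. The field names and values below are examples, not real releases:

```typescript
// Everything needed to replay a decision, versioned as one unit.
// Values are illustrative placeholders.
const decisionManifest = {
  promptVersion: "underwriting-prompt-v3",
  policyVersion: "rulebook-2024-06",
  modelId: "gpt-4o-mini",
  thresholds: { maxDti: 0.5, minIncome: 40_000 },
} as const;

// A stable key for grouping decisions made under identical configuration.
const manifestKey = [
  decisionManifest.promptVersion,
  decisionManifest.policyVersion,
  decisionManifest.modelId,
].join("|");
```

Changing any one component produces a new key, so two decisions with the same key are guaranteed to have run under the same prompt, policy, and model.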
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.