How to Build an Underwriting Agent Using LlamaIndex in TypeScript for Fintech
An underwriting agent automates the first pass of credit or risk assessment by reading applicant data, pulling policy and compliance rules, and producing a structured recommendation with evidence. In fintech, that matters because you want faster decisions without turning your model into a black box that compliance, audit, and risk teams can’t inspect.
Architecture
- **Applicant data ingestion**
  - Pulls structured inputs from KYC, bank statements, payroll, transaction history, and application forms.
  - Normalizes the payload into a consistent schema before any LLM call.
- **Policy and compliance knowledge base**
  - Stores underwriting rules, product policy docs, risk thresholds, and regulatory guidance.
  - Indexed with LlamaIndex so the agent can retrieve relevant policy snippets before deciding.
- **Decision engine**
  - Uses an LLM to classify the application as approve, review, or decline.
  - Forces structured output so downstream systems do not parse free-form prose.
- **Evidence layer**
  - Captures retrieved policy chunks and source metadata.
  - Needed for audit trails, model governance, and manual review.
- **Guardrails**
  - Prevents the agent from making unsupported decisions.
  - Enforces “no decision without evidence” and blocks PII leakage in logs.
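The normalization step can be sketched as a pure mapping function. The raw field names below (`bureau_score`, `monthly_debt`, and so on) are placeholders, not a real provider's schema; map them from your actual KYC and bank-statement integrations:

```typescript
// Sketch: normalize a raw application payload into a consistent
// schema before any LLM call. Raw field names are hypothetical.
type RawApplicationPayload = {
  kyc_full_name?: string;
  bureau_score?: number | string;
  monthly_debt?: number;
  monthly_income?: number;
  delinquencies_12m?: number;
};

type NormalizedApplicant = {
  applicantId: string;
  creditScore: number;
  dtiRatio: number;
  recentDelinquencies: number;
};

function normalizeApplicant(
  id: string,
  raw: RawApplicationPayload
): NormalizedApplicant {
  const income = raw.monthly_income ?? 0;
  return {
    applicantId: id,
    // Bureaus sometimes return scores as strings; coerce to number.
    creditScore: Number(raw.bureau_score ?? 0),
    // Debt-to-income as a 0-1 ratio; treat missing income as worst case.
    dtiRatio: income > 0 ? (raw.monthly_debt ?? 0) / income : 1,
    recentDelinquencies: raw.delinquencies_12m ?? 0,
  };
}
```

Doing this coercion up front means every downstream component, including the prompt, sees one shape regardless of which provider supplied the data.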
Implementation
1) Set up a policy index with LlamaIndex
For underwriting, the useful pattern is not “chat with documents.” It is “retrieve policy context, then make a constrained decision.” Start by indexing underwriting policy docs with VectorStoreIndex.
```typescript
import { Document, VectorStoreIndex } from "llamaindex";
import { OpenAIEmbedding } from "@llamaindex/openai";

async function buildPolicyIndex() {
  const docs = [
    new Document({
      text: `
Personal loan policy:
- Minimum credit score: 650
- Debt-to-income ratio must be below 40%
- Manual review required if recent delinquencies > 2
`,
      metadata: { source: "policy/personal-loan.md", version: "2025-01" },
    }),
    new Document({
      text: `
Compliance notes:
- Do not store raw account numbers in logs
- Keep applicant decision evidence for 7 years
- Data residency: EU customer records must remain in eu-west-1
`,
      metadata: { source: "compliance/retention.md", version: "2025-01" },
    }),
  ];

  return await VectorStoreIndex.fromDocuments(docs, {
    embedModel: new OpenAIEmbedding(),
  });
}
```
This gives you a retriever over policy content. In production, replace the inline docs with files from your document store or CMS.
2) Define a strict underwriting output schema
You want deterministic output. Use zod to define what the agent is allowed to return, then ask LlamaIndex to produce structured data through asStructuredLLM.
```typescript
import { z } from "zod";
import { OpenAI } from "@llamaindex/openai";

const UnderwritingDecisionSchema = z.object({
  decision: z.enum(["approve", "review", "decline"]),
  reason: z.string(),
  riskScore: z.number().min(0).max(100),
  evidence: z.array(
    z.object({
      source: z.string(),
      snippet: z.string(),
    })
  ),
});

const llm = new OpenAI({ model: "gpt-4o-mini", temperature: 0 });
const structuredLLM = llm.asStructuredLLM(UnderwritingDecisionSchema);
```
This is the difference between an assistant and an underwriting system. The schema gives you a stable contract for orchestration and audit logging.
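One guardrail worth wiring in at this point: if the model's output ever fails schema validation, fall back to `review` rather than crash or guess. Here is a dependency-free sketch of that fallback policy; in the real flow, zod's `safeParse` does the shape checking for you, and this guard only decides what happens on failure:

```typescript
// Minimal runtime guard mirroring UnderwritingDecisionSchema.
// Any malformed model output is routed to manual review.
type UnderwritingDecision = {
  decision: "approve" | "review" | "decline";
  reason: string;
  riskScore: number;
  evidence: { source: string; snippet: string }[];
};

function coerceDecision(candidate: unknown): UnderwritingDecision {
  const fallback: UnderwritingDecision = {
    decision: "review",
    reason: "Model output failed schema validation; routed to manual review.",
    riskScore: 50,
    evidence: [],
  };
  if (typeof candidate !== "object" || candidate === null) return fallback;
  const c = candidate as Record<string, unknown>;
  const validDecision =
    c.decision === "approve" ||
    c.decision === "review" ||
    c.decision === "decline";
  const validScore =
    typeof c.riskScore === "number" && c.riskScore >= 0 && c.riskScore <= 100;
  if (
    !validDecision ||
    !validScore ||
    typeof c.reason !== "string" ||
    !Array.isArray(c.evidence)
  ) {
    return fallback;
  }
  return candidate as UnderwritingDecision;
}
```

The key design choice is that validation failure is itself a routing signal: a malformed response never silently becomes an approval or a decline.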
3) Build the underwriting query flow
Use the retriever to fetch relevant policy context, then pass both applicant facts and retrieved evidence into the structured LLM. Keep the prompt narrow; underwriting should be rule-driven.
```typescript
type Applicant = {
  applicantId: string;
  creditScore: number;
  dtiRatio: number;
  recentDelinquencies: number;
};

async function underwriteApplicant(applicant: Applicant) {
  const index = await buildPolicyIndex();
  const retriever = index.asRetriever({ similarityTopK: 3 });

  const query = `
Underwrite this applicant using only the provided policy context.
Applicant:
- creditScore: ${applicant.creditScore}
- dtiRatio: ${applicant.dtiRatio}
- recentDelinquencies: ${applicant.recentDelinquencies}
Return a decision with evidence.
If evidence is insufficient, choose review.
`;

  const nodes = await retriever.retrieve({ query });
  const context = nodes
    .map(
      (node) =>
        `SOURCE=${node.node.metadata?.source}\n${node.node.getContent()}`
    )
    .join("\n\n");

  const result = await structuredLLM.complete({
    prompt: `
Policy context:
${context}

Applicant facts:
creditScore=${applicant.creditScore}
dtiRatio=${applicant.dtiRatio}
recentDelinquencies=${applicant.recentDelinquencies}

Decision rules:
- Approve only if all hard thresholds are met
- Review if any threshold is close or ambiguous
- Decline only if policy clearly supports it
`,
  });

  return result.raw;
}
```
The important part is that the model sees both facts and retrieved policy. You are not asking it to invent underwriting logic; you are asking it to apply known rules.
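In fact, the hard thresholds from the policy document can be enforced deterministically before any LLM call, leaving the model only the ambiguous middle. A sketch follows; the mapping of threshold failures to outcomes is one possible reading of the policy, not the only one:

```typescript
// Deterministic pre-check against the hard thresholds in
// policy/personal-loan.md, run before the LLM is invoked.
type PreCheckOutcome = "hard_decline" | "force_review" | "llm_decide";

function preCheck(a: {
  creditScore: number;
  dtiRatio: number;
  recentDelinquencies: number;
}): PreCheckOutcome {
  // Policy: minimum credit score 650; DTI must be below 40%.
  if (a.creditScore < 650 || a.dtiRatio >= 0.4) return "hard_decline";
  // Policy: manual review required if recent delinquencies > 2.
  if (a.recentDelinquencies > 2) return "force_review";
  // Thresholds met: let the retrieval-backed LLM weigh the rest.
  return "llm_decide";
}
```

Clear-cut cases then never depend on model behavior, which is easier to defend in an audit than "the model usually gets the thresholds right."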
4) Add an audit trail around every decision
Fintech systems need traceability. Persist input facts, retrieved sources, model version, and final output together so compliance can replay decisions later.
```typescript
async function run() {
  const decision = await underwriteApplicant({
    applicantId: "app_123",
    creditScore: 682,
    dtiRatio: 0.37,
    recentDelinquencies: 1,
  });

  console.log(
    JSON.stringify(
      {
        applicantId: "app_123",
        model: "gpt-4o-mini",
        timestamp: new Date().toISOString(),
        decision,
        auditTags: {
          domain: "underwriting",
          residencyRegion: "eu-west-1",
          retentionYears: 7,
        },
      },
      null,
      2
    )
  );
}

run().catch(console.error);
```
In a real service, send this record to your audit store or event bus. Do not rely on application logs alone.
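One concrete piece of the compliance note ("Do not store raw account numbers in logs") can be enforced right before the record leaves the process. A minimal sketch, assuming account numbers appear as 8-to-17-digit runs; tune the pattern to the account formats you actually handle:

```typescript
// Redact raw account numbers from any text bound for logs or the
// audit store, keeping only the last four digits for traceability.
// The 8-17 digit pattern is an assumption, not a standard.
function redactAccountNumbers(text: string): string {
  return text.replace(/\b\d{8,17}\b/g, (match) => `****${match.slice(-4)}`);
}
```

Run every serialized audit record through a redaction pass like this so that a stray account number in a model-generated `reason` string never reaches persistent storage.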
Production Considerations
- **Deployment**
  - Keep PII-sensitive workloads in-region for your residency requirements.
  - Separate retrieval infrastructure from customer-facing APIs so you can rotate indexes without downtime.
- **Monitoring**
  - Track approval rate by segment, manual review rate, retrieval hit quality, and schema validation failures.
- **Guardrails**
  - Never emit a decision without at least one retrieved policy evidence item; fall back to review.
  - Validate every model response against the output schema and route failures to manual review.
  - Redact account numbers and other PII before anything is logged or persisted.
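The monitoring counters above are straightforward to compute from persisted decision records. A sketch, where the `segment` labels and record shape are assumptions about what your audit store holds:

```typescript
// Sketch: monitoring metrics derived from persisted decision records.
type DecisionRecord = {
  segment: string;
  decision: "approve" | "review" | "decline";
  schemaValid: boolean;
};

// Approval rate per applicant segment, as a 0-1 fraction.
function approvalRateBySegment(records: DecisionRecord[]): Map<string, number> {
  const totals = new Map<string, { approved: number; all: number }>();
  for (const r of records) {
    const t = totals.get(r.segment) ?? { approved: 0, all: 0 };
    t.all += 1;
    if (r.decision === "approve") t.approved += 1;
    totals.set(r.segment, t);
  }
  const rates = new Map<string, number>();
  for (const [segment, t] of totals) rates.set(segment, t.approved / t.all);
  return rates;
}

// Fraction of decisions where the model output failed schema validation.
function schemaFailureRate(records: DecisionRecord[]): number {
  if (records.length === 0) return 0;
  return records.filter((r) => !r.schemaValid).length / records.length;
}
```

Alert on sudden shifts in either number: a jump in schema failures usually means a model or prompt change, and a drift in segment-level approval rates is exactly what your risk team will ask about first.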
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.