How to Build an Underwriting Agent Using LangGraph in TypeScript for Fintech
An underwriting agent decides whether a fintech application should be approved, declined, or sent for manual review. It matters because underwriting sits on the critical path for credit, lending, and embedded finance products, where bad decisions create losses, compliance issues, and customer friction.
Architecture
A production underwriting agent in fintech needs a small set of nodes with hard boundaries between them:

- Application intake node: normalizes applicant data from KYC, bank statements, bureau data, and internal product signals.
- Feature extraction node: converts raw inputs into underwriting features like income stability, debt burden, transaction volatility, and fraud risk.
- Policy/rules node: applies deterministic business rules before any model call. Example: reject if a sanctions hit exists or if required documents are missing.
- LLM reasoning node: produces a structured recommendation and explanation from the extracted features. Keep it constrained to summarization and rationale, not free-form decision making.
- Decision node: converts model output into approve, decline, or manual_review.
- Audit node: persists every input, intermediate state, and final decision for compliance review.
Implementation
1) Define the graph state and output contract
Keep the state explicit. In underwriting, hidden state is how you end up with decisions you cannot explain to compliance or auditors.
import { Annotation, END, StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const UnderwritingState = Annotation.Root({
  applicantId: Annotation<string>(),
  rawApplication: Annotation<any>(),
  features: Annotation<Record<string, any>>(),
  policyFlags: Annotation<string[]>(),
  recommendation: Annotation<"approve" | "decline" | "manual_review">(),
  rationale: Annotation<string>(),
});

const DecisionSchema = z.object({
  recommendation: z.enum(["approve", "decline", "manual_review"]),
  rationale: z.string().min(20),
});
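To make the fail-fast contract concrete, here is a minimal sketch of validating an untrusted payload with Zod's safeParse before acting on it. The rawModelOutput object is a hypothetical example, not something produced by the graph above:

```ts
// Sketch: validate an untrusted payload against DecisionSchema before persisting it.
// `rawModelOutput` is a hypothetical example payload, not produced by the code above.
const rawModelOutput = {
  recommendation: "approve",
  rationale: "Stable income, low DTI, clean 90-day cashflow history.",
};

const parsed = DecisionSchema.safeParse(rawModelOutput);
if (!parsed.success) {
  // Fail fast: surface the schema error instead of storing an invalid decision.
  throw new Error(`Invalid decision payload: ${parsed.error.message}`);
}
```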
2) Build deterministic preprocessing and policy checks
Do not ask the LLM to discover obvious rule violations. Put those in code so they are testable and auditable.
const extractFeatures = async (state: typeof UnderwritingState.State) => {
  const app = state.rawApplication;

  return {
    applicantId: state.applicantId,
    features: {
      monthlyIncome: app.monthlyIncome,
      debtToIncome: app.debtToIncome,
      bankBalanceAvg30d: app.bankBalanceAvg30d,
      nsfCount90d: app.nsfCount90d,
      bureauScore: app.bureauScore,
      country: app.country,
    },
    policyFlags: [],
  };
};
const applyPolicy = async (state: typeof UnderwritingState.State) => {
  const flags = [...(state.policyFlags ?? [])];
  const f = state.features;

  if (!f) return { policyFlags: ["missing_features"] };

  if (f.country !== "ZA" && f.country !== "KE") {
    flags.push("unsupported_residency");
  }
  if ((f.bureauScore ?? 0) < 450) {
    flags.push("hard_floor_bureau_score");
  }
  if ((f.nsfCount90d ?? 0) > 6) {
    flags.push("high_cashflow_instability");
  }

  return { policyFlags: flags };
};
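Because these checks are plain async functions, they are easy to unit test. A minimal sketch using Node's built-in test runner; the partial state object and the `as any` cast are shortcuts for illustration only:

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

test("applyPolicy flags bureau scores below the hard floor", async () => {
  // Partial state cast with `as any` for brevity in this sketch.
  const result = await applyPolicy({
    features: { bureauScore: 430, nsfCount90d: 0, country: "ZA" },
    policyFlags: [],
  } as any);

  assert.ok(result.policyFlags?.includes("hard_floor_bureau_score"));
});
```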
3) Add an LLM reasoning node with structured output
Use the model for explanation and borderline judgment only. The output should be schema-bound so you can validate it before persisting or acting on it.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
const reasonAboutRisk = async (state: typeof UnderwritingState.State) => {
  const prompt = `
You are assisting a fintech underwriting workflow.
Use only the provided features and policy flags.
Return a recommendation and concise rationale suitable for audit review.

Features:
${JSON.stringify(state.features, null, 2)}

Policy flags:
${JSON.stringify(state.policyFlags ?? [], null, 2)}
`;

  const response = await llm.withStructuredOutput(DecisionSchema).invoke(prompt);

  return {
    recommendation: response.recommendation,
    rationale: response.rationale,
  };
};
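withStructuredOutput will throw if the model cannot produce schema-valid output. A defensive pattern worth considering (an assumption of this guide, not a LangGraph requirement) is to catch that failure and degrade to manual_review, so a malformed response never becomes an automated approval:

```ts
// Sketch: wrap the LLM node so structured-output failures degrade safely.
const reasonAboutRiskSafe = async (state: typeof UnderwritingState.State) => {
  try {
    return await reasonAboutRisk(state);
  } catch (err) {
    // Never let a validation failure turn into an implicit approve/decline.
    return {
      recommendation: "manual_review" as const,
      rationale: `Model output failed schema validation: ${String(err)}`,
    };
  }
};
```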
4) Wire the graph and run it
This pattern gives you deterministic gates first, then model reasoning, then a final decision path.
const decideFinalOutcome = async (state: typeof UnderwritingState.State) => {
  const flags = state.policyFlags ?? [];

  if (flags.includes("unsupported_residency")) {
    return { recommendation: "manual_review" as const };
  }
  if (flags.includes("hard_floor_bureau_score")) {
    return { recommendation: "decline" as const };
  }

  return {
    recommendation:
      state.recommendation === "approve" || state.recommendation === "decline"
        ? state.recommendation
        : "manual_review",
  };
};
const graph = new StateGraph(UnderwritingState)
  .addNode("extractFeatures", extractFeatures)
  .addNode("applyPolicy", applyPolicy)
  .addNode("reasonAboutRisk", reasonAboutRisk)
  .addNode("decideFinalOutcome", decideFinalOutcome)
  .addEdge("__start__", "extractFeatures")
  .addEdge("extractFeatures", "applyPolicy")
  .addEdge("applyPolicy", "reasonAboutRisk")
  .addEdge("reasonAboutRisk", "decideFinalOutcome")
  .addEdge("decideFinalOutcome", END)
  .compile();
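Note that the architecture section lists an audit node that this minimal wiring omits. A sketch of what it could look like; persistAuditRecord is a stub standing in for your own storage layer, not a real API:

```ts
// Stub: replace with a write to your own audit store (DB, append-only log, etc.).
const persistAuditRecord = async (record: Record<string, any>) => {
  console.log("AUDIT", JSON.stringify(record));
};

// Sketch: an audit node that snapshots state before the graph returns.
const audit = async (state: typeof UnderwritingState.State) => {
  await persistAuditRecord({
    applicantId: state.applicantId,
    featuresSnapshot: state.features,
    policyFlags: state.policyFlags ?? [],
    recommendation: state.recommendation,
    rationale: state.rationale,
    decidedAt: new Date().toISOString(),
  });
  return {};
};
// To wire it in, add the node and route decideFinalOutcome -> audit -> END.
```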
async function run() {
  const result = await graph.invoke({
    applicantId: "app_123",
    rawApplication: {
      monthlyIncome: 4200,
      debtToIncome: 0.31,
      bankBalanceAvg30d: 0.42,
      nsfCount90d: 2,
      bureauScore: 612,
      country: "ZA",
    },
  });
  console.log(result);
}

run().catch(console.error);
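One refinement worth considering: when a hard policy flag already determines the outcome, the LLM call is wasted cost and latency. A sketch using LangGraph's addConditionalEdges to route around the reasoning node; the routing logic mirrors decideFinalOutcome above:

```ts
// Sketch: skip the LLM entirely when deterministic policy already decides.
const routeAfterPolicy = (state: typeof UnderwritingState.State) => {
  const flags = state.policyFlags ?? [];
  const hardStop =
    flags.includes("hard_floor_bureau_score") ||
    flags.includes("unsupported_residency");
  return hardStop ? "decideFinalOutcome" : "reasonAboutRisk";
};

const gatedGraph = new StateGraph(UnderwritingState)
  .addNode("extractFeatures", extractFeatures)
  .addNode("applyPolicy", applyPolicy)
  .addNode("reasonAboutRisk", reasonAboutRisk)
  .addNode("decideFinalOutcome", decideFinalOutcome)
  .addEdge("__start__", "extractFeatures")
  .addEdge("extractFeatures", "applyPolicy")
  .addConditionalEdges("applyPolicy", routeAfterPolicy)
  .addEdge("reasonAboutRisk", "decideFinalOutcome")
  .addEdge("decideFinalOutcome", END)
  .compile();
```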
Production Considerations
- Keep PII out of prompts unless strictly necessary. Mask account numbers, IDs, and free-text notes before sending data to the model. For fintech workloads, store raw records in your own systems and pass only derived features to the graph (see the masking sketch after this list).
- Log every transition for auditability. Persist input state, policy flags, model output, final decision, model version, prompt version, and timestamp. Compliance teams will ask why a customer was declined six months later.
- Enforce data residency at the orchestration layer. If your lending book is region-bound, route inference to region-specific infrastructure and avoid cross-border prompt storage. This is usually a platform control problem more than an application problem.
- Add human review thresholds. Any weak combination of signals should route to manual_review. In credit decisioning, a conservative escalation path is cheaper than explaining an opaque decline after the fact.
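The masking sketch referenced in the first bullet above. The function and field names are illustrative assumptions, not a real API:

```ts
// Sketch (illustrative field names): mask or drop PII before it can reach a prompt.
const maskForPrompt = (app: Record<string, any>) => {
  const { accountNumber, nationalId, freeTextNotes, ...rest } = app;
  return {
    ...rest,
    // Keep the last 4 digits for traceability; drop ID numbers and free text.
    accountNumberMasked: accountNumber
      ? `****${String(accountNumber).slice(-4)}`
      : undefined,
  };
};
```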
Common Pitfalls
- Letting the LLM make hard policy decisions. Don't ask it to decide sanctions handling or minimum score thresholds. Put those in code so they are deterministic and easy to test.
- Using unstructured text outputs. Free-form answers are hard to validate and impossible to enforce at scale. Use withStructuredOutput() plus a Zod schema so invalid outputs fail fast.
- Skipping audit metadata. If you don't store feature snapshots and prompt versions alongside outcomes, you cannot reconstruct decisions later. That becomes a real problem during disputes, internal reviews, or regulator requests (a minimal audit-record sketch follows this list).
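A minimal shape for that audit metadata, extending the audit-node sketch earlier. Field names are assumptions; the persistence mechanism is whatever your compliance stack already uses:

```ts
// Sketch: one immutable record per decision, written before the outcome is returned.
interface UnderwritingAuditRecord {
  applicantId: string;
  featuresSnapshot: Record<string, any>; // the exact features the model saw
  policyFlags: string[];
  recommendation: "approve" | "decline" | "manual_review";
  rationale: string;
  modelVersion: string;  // e.g. "gpt-4o-mini"
  promptVersion: string; // version your prompt templates explicitly
  decidedAt: string;     // ISO-8601 timestamp
}
```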
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.