# How to Build an Underwriting Agent Using LangGraph in TypeScript for Wealth Management
An underwriting agent for wealth management evaluates client applications, portfolio constraints, suitability rules, and risk signals before a human advisor or operations team approves the next step. It matters because you need fast decisions without violating compliance, suitability, or audit requirements.
## Architecture
- **Input normalizer**: Converts raw client data, KYC fields, portfolio details, and requested product terms into a consistent state object.
- **Policy/rules engine**: Applies deterministic checks for suitability, concentration limits, jurisdiction rules, and internal product restrictions.
- **LLM reasoning node**: Handles ambiguous cases like missing context, exception summaries, and narrative risk explanations.
- **Decision router**: Chooses between `approve`, `reject`, or `escalate_to_human` based on rule outputs and model confidence.
- **Audit logger**: Captures every state transition, intermediate decision, and final rationale for compliance review.
- **Human review handoff**: Packages the full case file when the agent cannot make a defensible automated decision.
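The decision router above can be sketched as a pure function. This is an illustrative sketch, not part of LangGraph: `routeDecision` and its confidence threshold are assumptions, but they capture the key invariant that deterministic rule outcomes always beat the model.

```typescript
type Decision = "approve" | "reject" | "escalate_to_human";

// Hypothetical router: hard rule outcomes always win; the model's
// confidence only matters when the rules engine left the case open.
function routeDecision(
  ruleDecision: Decision | undefined,
  modelConfidence: number,
  threshold = 0.8
): Decision {
  if (ruleDecision) return ruleDecision; // deterministic rules take priority
  return modelConfidence >= threshold ? "approve" : "escalate_to_human";
}
```

Keeping the router pure makes the escalation boundary easy to unit test without spinning up the graph.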
## Implementation

### 1) Define a typed state for underwriting
In wealth management, your state should carry both business facts and compliance artifacts. Keep it explicit; don’t hide important fields inside free-form text.
```typescript
import { Annotation, END, START, StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

type UnderwritingDecision = "approve" | "reject" | "escalate_to_human";

const UnderwritingState = Annotation.Root({
  clientId: Annotation<string>,
  jurisdiction: Annotation<string>,
  productType: Annotation<string>,
  requestedAmount: Annotation<number>,
  netWorth: Annotation<number>,
  liquidAssets: Annotation<number>,
  riskTolerance: Annotation<"low" | "medium" | "high">,
  kycStatus: Annotation<"pending" | "verified" | "failed">,
  amlFlag: Annotation<boolean>,
  policyResult: Annotation<string>,
  llmRationale: Annotation<string>,
  decision: Annotation<UnderwritingDecision>,
});
```
This is the core pattern. Typed state keeps the graph predictable and makes audit logs easier to reconstruct later.
### 2) Add deterministic compliance checks before any model call
Wealth workflows should fail closed. If KYC is incomplete or AML flags are present, route directly to escalation.
```typescript
function policyCheck(state: typeof UnderwritingState.State) {
  // Fail closed: unresolved KYC or AML issues never reach the model.
  if (state.kycStatus !== "verified") {
    return {
      policyResult: "KYC not verified",
      decision: "escalate_to_human" as const,
    };
  }
  if (state.amlFlag) {
    return {
      policyResult: "AML flag present",
      decision: "escalate_to_human" as const,
    };
  }
  // Math.max guards against a zero or missing net worth.
  const concentrationRatio = state.requestedAmount / Math.max(state.netWorth, 1);
  if (concentrationRatio > 0.25) {
    return {
      policyResult: `Requested amount exceeds concentration threshold (${concentrationRatio.toFixed(2)})`,
      decision: "escalate_to_human" as const,
    };
  }
  // Leave `decision` unset so downstream nodes can fill it in.
  return { policyResult: "Policy checks passed" };
}
```
This node should be boring. That’s good. Deterministic controls are what keep your agent defensible during audits.
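To keep that node testable, the concentration rule can be pulled out into a pure helper. This is a sketch under the same assumptions as the code above (the 25% threshold mirrors `policyCheck`; the helper name is ours, not part of any library):

```typescript
// Sketch: the concentration rule from policyCheck as a pure, testable helper.
// Math.max guards against a zero or missing net worth (divide-by-zero).
function exceedsConcentration(
  requestedAmount: number,
  netWorth: number,
  threshold = 0.25
): boolean {
  return requestedAmount / Math.max(netWorth, 1) > threshold;
}
```

Pure rule helpers like this can be covered by ordinary unit tests, which is exactly the evidence auditors ask for.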
### 3) Use an LLM only for explanation and edge-case interpretation
Do not let the model override hard rules. Use it to summarize risk and explain why a case needs review.
```typescript
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function explainCase(state: typeof UnderwritingState.State) {
  const prompt = `
You are supporting a wealth management underwriting workflow.
Summarize the case in one paragraph and identify any non-obvious risk factors.
Return concise language suitable for an audit trail.

Client ID: ${state.clientId}
Jurisdiction: ${state.jurisdiction}
Product Type: ${state.productType}
Requested Amount: ${state.requestedAmount}
Net Worth: ${state.netWorth}
Liquid Assets: ${state.liquidAssets}
Risk Tolerance: ${state.riskTolerance}
Policy Result: ${state.policyResult}
`;
  const response = await llm.invoke(prompt);
  return {
    llmRationale:
      typeof response.content === "string"
        ? response.content
        : JSON.stringify(response.content),
    // The model never overrides the rules engine; it only fills in a
    // decision when policy checks left the case open.
    decision: state.decision ?? ("approve" as const),
  };
}
```
The model is not deciding suitability here. It is generating structured reasoning that compliance teams can inspect later.
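Since the rationale lands in an audit trail, it is also worth masking direct identifiers before they reach the model or the log. A minimal sketch, assuming a simple last-four masking policy (`maskClientId` is our helper, not part of LangChain):

```typescript
// Hypothetical helper: keep only the last four characters of an identifier
// so prompts and audit logs avoid carrying raw client IDs.
function maskClientId(id: string): string {
  if (id.length <= 4) return "****";
  return "*".repeat(id.length - 4) + id.slice(-4);
}
```

You would call this when building the prompt (for example, `Client ID: ${maskClientId(state.clientId)}`) so the full identifier stays inside your systems of record.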
### 4) Wire the graph with conditional routing

Use `StateGraph` with a conditional edge so failed policy checks bypass the model entirely.
```typescript
const graph = new StateGraph(UnderwritingState)
  .addNode("policyCheck", policyCheck)
  .addNode("explainCase", explainCase)
  .addEdge(START, "policyCheck")
  .addConditionalEdges("policyCheck", (state) => {
    if (state.decision === "escalate_to_human") return END;
    return "explainCase";
  })
  .addEdge("explainCase", END);

const app = graph.compile();

async function run() {
  // Example input; real values come from your intake pipeline.
  const result = await app.invoke({
    clientId: "C-1042",
    jurisdiction: "US",
    productType: "structured_note",
    requestedAmount: 250_000,
    netWorth: 2_000_000,
    liquidAssets: 600_000,
    riskTolerance: "medium",
    kycStatus: "verified",
    amlFlag: false,
  });
  console.log(result.decision, result.llmRationale);
}

run();
```
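The audit logger from the architecture section can start as a simple in-memory recorder that each node calls with its state update. This is a sketch (`createAuditLog` is our assumption, not a LangGraph API); a production system would write to durable, append-only storage instead of an array.

```typescript
type AuditEntry = {
  node: string;
  timestamp: string;
  update: Record<string, unknown>;
};

// Sketch of an in-memory audit logger: every node records its state update
// so the full decision path can be reconstructed for compliance review.
function createAuditLog() {
  const entries: AuditEntry[] = [];
  return {
    record(node: string, update: Record<string, unknown>): void {
      entries.push({ node, timestamp: new Date().toISOString(), update });
    },
    all(): readonly AuditEntry[] {
      return entries;
    },
  };
}
```

Each node would call `record` just before returning its partial state, giving you an ordered trail of transitions alongside the final decision.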
---
## Keep learning
- [The complete AI Agents Roadmap](/blog/ai-agents-roadmap-2026) — my full 8-step breakdown
- [Free: The AI Agent Starter Kit](/starter-kit) — PDF checklist + starter code
- [Work with me](/contact) — I build AI for banks and insurance companies
*By Cyprian Aarons, AI Consultant at [Topiax](https://topiax.xyz).*