How to Build an Underwriting Agent Using LangGraph in TypeScript for Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, langgraph, typescript, banking

An underwriting agent in banking takes a loan or credit application, gathers the relevant facts, checks policy rules, evaluates risk signals, and produces a decision recommendation with an audit trail. It matters because underwriting is where banks control credit loss, compliance exposure, and turnaround time; if the workflow is inconsistent or opaque, you get bad decisions and regulatory trouble.

Architecture

A production underwriting agent in LangGraph needs these components:

  • Input normalization layer

    • Converts raw application payloads into a typed state object.
    • Validates required fields like income, liabilities, collateral, jurisdiction, and product type (see the schema sketch after this list).
  • Policy and compliance checker

    • Applies hard rules before any model call.
    • Blocks prohibited decisions, missing disclosures, or unsupported geographies.
  • Risk assessment node

    • Uses an LLM or deterministic scoring service to summarize applicant risk.
    • Produces a structured output: risk band, key drivers, and confidence.
  • Decision router

    • Routes to approve, decline, or manual review.
    • Keeps human-in-the-loop for edge cases and policy exceptions.
  • Audit logger

    • Persists every node input/output with timestamps and trace IDs.
    • Required for model governance, dispute handling, and regulator review.
  • Redaction and residency controls

    • Removes PII from prompts where possible.
    • Ensures data stays in the correct region if you have residency constraints.
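
For the input normalization layer, a plain zod schema at the service boundary is often enough. Here is a minimal sketch; ApplicationSchema and normalizeApplication are illustrative names, and the fields mirror the state object defined in the Implementation section below.

import { z } from "zod";

// Hypothetical schema for the input normalization layer. Reject malformed
// payloads here, before any graph execution or model spend.
const ApplicationSchema = z.object({
  applicantId: z.string().min(1),
  jurisdiction: z.string().length(2),
  income: z.number().nonnegative(),
  liabilities: z.number().nonnegative(),
  requestedAmount: z.number().positive(),
});

const normalizeApplication = (raw: unknown) => ApplicationSchema.parse(raw);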

Implementation

1) Define the graph state and typed outputs

Use a typed state object so every node has a clear contract. In banking workflows, this prevents “stringly typed” mistakes that are painful to audit later.

import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { z } from "zod";

const RiskSchema = z.object({
  score: z.number().min(0).max(100),
  band: z.enum(["low", "medium", "high"]),
  reasons: z.array(z.string()).min(1),
});

const State = Annotation.Root({
  applicantId: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  income: Annotation<number>(),
  liabilities: Annotation<number>(),
  requestedAmount: Annotation<number>(),
  policyPass: Annotation<boolean | undefined>(),
  risk: Annotation<z.infer<typeof RiskSchema> | undefined>(),
  decision: Annotation<"approve" | "decline" | "manual_review" | undefined>(),
  auditTrail: Annotation<Array<{ step: string; detail: unknown }>>({
    // Append-only reducer: each node returns new entries and the graph
    // concatenates them, so earlier audit entries are never overwritten.
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

// Derive the node contract from the annotation so there is a single source
// of truth for the state shape.
type UnderwritingState = typeof State.State;

2) Add policy checks before any model call

Hard rules should run first. If the application fails compliance checks, do not waste tokens on model inference.

const policyCheck = async (state: UnderwritingState) => {
  // Guard against divide-by-zero for applicants reporting no income.
  const debtToIncome = state.liabilities / Math.max(state.income, 1);

  // Illustrative hard rules; real thresholds come from your credit policy.
  const pass =
    state.jurisdiction === "US" &&
    state.requestedAmount <= state.income * 5 &&
    debtToIncome < 0.6;

  return {
    policyPass: pass,
    auditTrail: [
      {
        step: "policyCheck",
        detail: { debtToIncome, jurisdiction: state.jurisdiction, pass },
      },
    ],
    // Only set a decision on failure; `as const` keeps the literal type.
    ...(pass ? {} : { decision: "decline" as const }),
  };
};

3) Route to the model only when policy passes

Use StateGraph with conditional routing. This pattern keeps the workflow explicit and easy to explain to auditors.

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

// Bind the zod schema so the model must return validated structured output
// instead of free text we would have to JSON.parse and hope for the best.
const riskModel = llm.withStructuredOutput(RiskSchema);

const riskAssessment = async (state: UnderwritingState) => {
  const prompt = `
Assess underwriting risk for this banking application.
Return a score (0-100), a band (low|medium|high), and the key reasons.

Applicant:
- income=${state.income}
- liabilities=${state.liabilities}
- requestedAmount=${state.requestedAmount}
- jurisdiction=${state.jurisdiction}
`;

  const risk = await riskModel.invoke(prompt);

  return {
    risk,
    auditTrail: [{ step: "riskAssessment", detail: risk }],
  };
};

const decide = async (state: UnderwritingState) => {
  // Deterministic routing: the model never gets the final say on its own.
  let decision: "approve" | "decline" | "manual_review";

  if (state.policyPass === false) decision = "decline";
  else if (!state.risk) decision = "manual_review";
  else if (state.risk.band === "low") decision = "approve";
  else if (state.risk.band === "high") decision = "decline";
  else decision = "manual_review";

  return {
    decision,
    auditTrail: [{ step: "decide", detail: { decision } }],
  };
};

const graph = new StateGraph(State)
  .addNode("policyCheck", policyCheck)
  .addNode("riskAssessment", riskAssessment)
  .addNode("decide", decide)
  .addEdge(START, "policyCheck")
  .addConditionalEdges("policyCheck", (state) =>
    state.policyPass ? "riskAssessment" : END
  )
  .addEdge("riskAssessment", "decide")
  .addEdge("decide", END)
  .compile();
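
If manual review should pause the run rather than just label it, LangGraph can interrupt inside a node and wait for a human to resume the thread. A minimal sketch, assuming the graph is compiled with a checkpointer; humanReview and the resume payload shape are illustrative:

import { interrupt, MemorySaver } from "@langchain/langgraph";

// Hypothetical review node: interrupt() pauses the run and surfaces the
// payload to your review queue; whatever value a reviewer resumes the
// thread with becomes the return value when the node replays.
const humanReview = async (state: UnderwritingState) => {
  const reviewerDecision = interrupt({
    applicantId: state.applicantId,
    risk: state.risk,
  }) as "approve" | "decline";

  return {
    decision: reviewerDecision,
    auditTrail: [{ step: "humanReview", detail: { reviewerDecision } }],
  };
};

// interrupt() only works when the graph is compiled with a checkpointer,
// e.g. .compile({ checkpointer: new MemorySaver() }).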

4) Invoke it with an auditable request payload

Keep the input minimal and log the final output separately in your application layer. That gives you clean separation between orchestration logic and persistence.

const result = await graph.invoke({
  applicantId: "app_123",
  jurisdiction: "US",
  income: 120000,
  liabilities: 30000,
  requestedAmount: 200000,
});

console.log({
  applicantId: result.applicantId,
  policyPass: result.policyPass,
  riskBand: result.risk?.band,
  decision: result.decision,
  auditTrailLength: result.auditTrail.length,
});

Production Considerations

  • Deploy in-region

    Keep application data and model calls inside approved regions. Banking teams care about data residency as much as latency.

  • Persist full trace context

    Store applicantId, graph version, node outputs, timestamps, and final decision in immutable logs. You need this for model governance and adverse action reviews.

  • Add guardrails before LLM invocation

    Redact SSNs, account numbers, and free-text PII before prompts, and enforce schema validation on all model outputs with zod or equivalent. A minimal redaction sketch follows this list.

  • Separate manual review from auto-decision

    High-risk cases should route to a human underwriter. Do not let the agent “self approve” borderline applications without a review queue.
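
As a starting point, redaction can be a pure function applied to every prompt before it leaves your boundary. The patterns below are illustrative, not a complete PII taxonomy; production systems usually layer a dedicated DLP or tokenization service on top of simple rules like these.

const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]"], // US SSNs
  [/\b\d{9,17}\b/g, "[REDACTED_ACCOUNT]"], // bare account-number-length digits
  [/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[REDACTED_EMAIL]"], // email addresses
];

const redact = (text: string): string =>
  REDACTIONS.reduce((acc, [pattern, token]) => acc.replace(pattern, token), text);

// Apply before any model call, e.g. riskModel.invoke(redact(prompt)).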

Common Pitfalls

  1. Letting the LLM make policy decisions

    Policy should be deterministic code. If you ask the model whether a loan violates bank rules, you will eventually get inconsistent outcomes.

  2. Skipping structured output validation

    Never trust raw text from the model. Parse into a schema like RiskSchema before using it in routing logic.

  3. Not versioning the graph

    Underwriting logic changes over time. Tag each deployed graph build so you can reproduce decisions during audits or customer disputes; a version-stamping sketch follows.
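
One lightweight pattern is to stamp a version identifier into every persisted decision record. A minimal sketch; GRAPH_VERSION and AuditStore are hypothetical, standing in for your build metadata and your bank's immutable log store:

// Hypothetical version constant, bumped on every change to nodes, prompts,
// or routing logic.
const GRAPH_VERSION = "underwriting-graph@1.4.0";

// Hypothetical persistence interface; back it with an append-only store.
interface AuditStore {
  write(record: Record<string, unknown>): Promise<void>;
}

const persistDecision = async (store: AuditStore, result: UnderwritingState) => {
  await store.write({
    graphVersion: GRAPH_VERSION,
    applicantId: result.applicantId,
    decision: result.decision,
    auditTrail: result.auditTrail,
    decidedAt: new Date().toISOString(),
  });
};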


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
