How to Build an Underwriting Agent Using LangGraph in TypeScript for Investment Banking
An underwriting agent helps bankers move from raw deal documents to a defensible credit or equity decision faster. In investment banking, that matters because the agent is not just summarizing information — it is extracting risk signals, checking policy constraints, and producing an audit trail that compliance and senior reviewers can trust.
Architecture
- Document ingestion layer
  - Pulls in pitch decks, financial statements, CIMs, term sheets, and KYC files.
  - Normalizes PDFs, spreadsheets, and text into structured inputs.
- State model
  - Stores the deal context across steps: issuer profile, financial metrics, risk flags, policy checks, and draft recommendation.
  - Keeps every intermediate result available for review and audit.
- Analysis nodes
  - Separate nodes for financial extraction, covenant analysis, sector risk review, and compliance screening.
  - Each node should do one thing well and return structured output.
- Decision router
  - Routes low-confidence or high-risk deals to human review.
  - Prevents the graph from auto-finalizing when data is incomplete or policy checks fail.
- Audit and trace layer
  - Logs prompts, model outputs, source citations, and node transitions.
  - Required for model risk management and post-trade review.
- Policy guardrails
  - Enforces residency rules, approved data sources, and forbidden actions like fabricating missing financials.
  - Blocks generation if the deal violates internal banking policy.
Implementation
1) Define the underwriting state
Use a typed state object so every node knows what it can read and write. In LangGraph for TypeScript, Annotation.Root gives you a clean way to define this contract.
import { Annotation } from "@langchain/langgraph";

export const UnderwritingState = Annotation.Root({
  dealId: Annotation<string>(),
  issuerName: Annotation<string>(),
  sector: Annotation<string>(),
  documents: Annotation<string[]>(),
  extractedFacts: Annotation<Record<string, unknown>>(),
  riskFlags: Annotation<string[]>(),
  complianceFlags: Annotation<string[]>(),
  recommendation: Annotation<"approve" | "reject" | "review" | null>(),
  rationale: Annotation<string>(),
});
This state becomes the backbone of the workflow. For investment banking use cases, keep it explicit instead of stuffing everything into one opaque blob.
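Because the state is explicit, you can also validate it with plain TypeScript before any model call ever runs. A minimal sketch, where `missingFactFlags` is a hypothetical helper and the set of required fields is an assumption you would replace with your credit policy's required metrics:

```typescript
// Hypothetical guard: checks that required underwriting metrics are present
// and numeric before they are written into graph state.
type ExtractedFacts = {
  revenue?: number;
  ebitda?: number;
  leverageRatio?: number;
};

function missingFactFlags(facts: ExtractedFacts): string[] {
  const flags: string[] = [];
  if (typeof facts.revenue !== "number") flags.push("missing_revenue");
  if (typeof facts.ebitda !== "number") flags.push("missing_ebitda");
  if (typeof facts.leverageRatio !== "number") flags.push("missing_leverage_ratio");
  return flags;
}
```

A deal missing its leverage ratio would come back as `["missing_leverage_ratio"]`, which the router can later treat as grounds for escalation rather than letting the model guess the number.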
2) Build nodes for extraction, risk analysis, and compliance
Each node should return partial state updates. Use StateGraph to wire them together with deterministic control flow.
import { StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function extractFacts(state: typeof UnderwritingState.State) {
  const prompt = `
Extract underwriting facts from these documents.
Return JSON with revenue, ebitda, leverageRatio, keyRisks.
Documents:
${state.documents.join("\n\n")}
`;
  const response = await model.invoke(prompt);
  return {
    extractedFacts: { raw: response.content },
    riskFlags: [],
    complianceFlags: [],
  };
}

async function assessRisk(state: typeof UnderwritingState.State) {
  const facts = JSON.stringify(state.extractedFacts);
  const prompt = `
You are reviewing an investment banking underwriting case.
Identify credit or transaction risks from these facts:
${facts}
`;
  const response = await model.invoke(prompt);
  return {
    riskFlags: [String(response.content)],
    rationale: String(response.content),
  };
}

async function checkCompliance(state: typeof UnderwritingState.State) {
  const flags: string[] = [];
  if (!state.documents.length) flags.push("missing_source_documents");
  if (!state.issuerName) flags.push("missing_issuer_name");
  return { complianceFlags: flags };
}

function routeDecision(state: typeof UnderwritingState.State) {
  if (state.complianceFlags.length > 0) return "humanReview";
  if (state.riskFlags.length > 0) return "humanReview";
  return "finalize";
}
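The extractFacts node above stores the raw model text in state. Before downstream nodes trust it, it helps to parse defensively: models often wrap JSON in code fences or add surrounding prose. A minimal sketch, where `parseModelJson` is a hypothetical helper rather than a LangGraph or LangChain API:

```typescript
// Hypothetical helper: strips code fences from model output and falls back
// to a review flag on parse failure instead of guessing at the content.
function parseModelJson(raw: string): { facts: Record<string, unknown>; flags: string[] } {
  const cleaned = raw.replace(/`{3}(?:json)?/g, "").trim();
  try {
    return { facts: JSON.parse(cleaned) as Record<string, unknown>, flags: [] };
  } catch {
    return { facts: {}, flags: ["unparseable_model_output"] };
  }
}
```

Routing an `unparseable_model_output` flag to human review keeps a malformed model response from silently becoming an underwriting decision.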
3) Wire the graph with conditional routing
The actual underwriting flow should stop on missing data or elevated risk. addConditionalEdges is where you keep the process honest.
const graph = new StateGraph(UnderwritingState)
  .addNode("extractFacts", extractFacts)
  .addNode("assessRisk", assessRisk)
  .addNode("checkCompliance", checkCompliance)
  .addNode("humanReview", async (state) => ({
    recommendation: "review" as const,
    rationale: `Escalated for manual review. ${state.rationale}`,
  }));

graph.addEdge(START, "extractFacts");
graph.addEdge("extractFacts", "assessRisk");
graph.addEdge("assessRisk", "checkCompliance");
graph.addConditionalEdges("checkCompliance", routeDecision, {
  humanReview: "humanReview",
  finalize: END, // clean deals finish here; no separate unconditional edge needed
});
graph.addEdge("humanReview", END);

const app = graph.compile();
The important pattern here is that no single LLM call makes the final underwriting decision. The graph enforces separation between extraction, analysis, and escalation.
4) Run with a real deal payload
Feed the agent a bounded set of documents and keep outputs structured enough for downstream systems like CRM or deal logs.
const result = await app.invoke({
  dealId: "D-10492",
  issuerName: "Northwind Holdings",
  sector: "Industrials",
  documents: [
    "Revenue grew to $420M in FY2024. EBITDA margin was 18%. Net debt / EBITDA is 4.8x.",
    "CIM notes customer concentration with top three customers representing 41% of revenue.",
    "KYC confirms beneficial ownership but sanctions screening is pending.",
  ],
  extractedFacts: {},
  riskFlags: [],
  complianceFlags: [],
  recommendation: null,
  rationale: "",
});

console.log(result.recommendation);
console.log(result.rationale);
For production systems in banking, you would usually replace the inline document strings with retrieved chunks from a controlled document store. The key point is that LangGraph keeps each step observable and replayable.
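Observability can start as simply as hashing state before and after each node runs. A sketch using Node's built-in crypto module; the record shape and the version strings are assumptions for illustration, not a LangGraph API:

```typescript
import { createHash } from "node:crypto";

// Assumed audit-record shape: one entry per node transition.
type AuditRecord = {
  node: string;
  inputHash: string;
  outputHash: string;
  modelVersion: string;
  promptVersion: string;
  timestamp: string;
};

// SHA-256 over the serialized state gives a tamper-evident fingerprint.
function hashState(state: unknown): string {
  return createHash("sha256").update(JSON.stringify(state)).digest("hex");
}

function auditRecord(node: string, input: unknown, output: unknown): AuditRecord {
  return {
    node,
    inputHash: hashState(input),
    outputHash: hashState(output),
    modelVersion: "gpt-4o-mini", // assumption: pinned per deployment
    promptVersion: "v1",         // assumption: tracked in a prompt registry
    timestamp: new Date().toISOString(),
  };
}
```

Writing one such record per transition to an append-only store gives reviewers a replayable trail without persisting full document contents at every step.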
Production Considerations
- Deploy in-region
  - Keep inference endpoints and document stores in approved jurisdictions.
  - Data residency is not optional when client docs include MNPI or regulated personal data.
- Log every node transition
  - Capture input state hashes, output state hashes, model version, prompt template version, and timestamps.
  - This gives you an audit trail for model governance and internal review committees.
- Add hard guardrails before generation
  - Block recommendations when sanctions checks are incomplete or source docs are missing.
- Use human-in-the-loop thresholds
  - If leverage exceeds policy limits or confidence drops below a set threshold, route to an analyst instead of returning an automated decision.
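The threshold logic itself should stay deterministic and testable, separate from any model call. A minimal sketch with made-up policy limits; real values would come from your credit policy, and `needsAnalystReview` is a hypothetical helper:

```typescript
// Illustrative policy limits only; replace with values from credit policy.
const MAX_LEVERAGE = 4.0;
const MIN_CONFIDENCE = 0.7;

function needsAnalystReview(leverageRatio: number | null, confidence: number): boolean {
  if (leverageRatio === null) return true;       // missing data: never auto-decide
  if (leverageRatio > MAX_LEVERAGE) return true; // above policy limit
  if (confidence < MIN_CONFIDENCE) return true;  // low model confidence
  return false;
}
```

Plugged into the router, the sample deal's 4.8x net debt / EBITDA would exceed the illustrative 4.0x limit and land with an analyst rather than auto-finalizing.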
Common Pitfalls
- Letting the model invent missing numbers
  Never ask the LLM to "fill gaps" in financial statements. If revenue or leverage is absent from source docs, the correct output is review, not a guessed value.
- Collapsing all logic into one prompt
  A single giant prompt makes it impossible to audit why a deal was escalated. Split extraction, risk assessment, and compliance into separate LangGraph nodes so each step can be tested independently.
- Ignoring policy metadata
  If you do not carry fields like jurisdiction, document source, and sanctions status through state, you will end up with decisions that cannot pass compliance review. Keep those fields first-class in your graph state from day one.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.