Machine Learning Skills for Underwriters in Investment Banking: What to Learn in 2026
AI is changing underwriting in investment banking in a very specific way: it is compressing the time needed to screen issuers, summarize disclosures, compare comps, and flag risk factors. The underwriter who can read a model output, challenge it with market context, and turn that into a defensible credit or issuance decision will stay valuable.
The 5 Skills That Matter Most
**Financial statement analysis with Python**
You do not need to become a quant, but you do need to automate the repetitive parts of issuer analysis. Learn how to pull financials into pandas, normalize line items across filings, and calculate leverage, coverage, liquidity, and covenant headroom quickly.
For an underwriter, this matters because AI tools can surface patterns, but they still need clean inputs and human judgment. A simple script that parses 10-Ks or earnings releases can save hours during deal screening.
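As a sketch of what "calculate leverage and coverage quickly" looks like in pandas, assuming you have already normalized line items into a DataFrame (the issuer names and figures below are invented for illustration):

```python
import pandas as pd

# Hypothetical, already-normalized line items for three issuers.
# In practice these would come from parsed 10-Ks or earnings releases.
financials = pd.DataFrame({
    "issuer": ["AcmeCo", "BetaCorp", "GammaInc"],
    "total_debt": [1200.0, 800.0, 2500.0],
    "ebitda": [400.0, 320.0, 500.0],
    "interest_expense": [60.0, 40.0, 150.0],
})

# Standard underwriting ratios: leverage and interest coverage.
financials["leverage"] = financials["total_debt"] / financials["ebitda"]
financials["coverage"] = financials["ebitda"] / financials["interest_expense"]

# Flag issuers above a sample leverage threshold of 4.0x.
flagged = financials.loc[financials["leverage"] > 4.0, "issuer"].tolist()
print(flagged)  # ['GammaInc']
```

The point is not the three lines of arithmetic; it is that once the ratios are columns, the same screen runs on thirty issuers as easily as three.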
**Prompting and document interrogation**
Underwriting lives in long documents: offering memoranda, credit agreements, rating agency notes, MD&A sections, and investor decks. You should learn how to ask an LLM for structured extraction: debt maturities, change-of-control clauses, restricted payment baskets, litigation mentions, or risk factor changes.
The skill is not “chatting with AI.” It is building repeatable prompts that produce reliable outputs you can review fast. In practice, this helps you triage deal docs before the first committee meeting.
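One way to make prompts repeatable is to generate them from a fixed field schema rather than typing them ad hoc. A minimal sketch (the field names are illustrative, not a standard schema, and this only builds the prompt; the LLM call itself would go through whatever provider your firm approves):

```python
import json

# Illustrative extraction schema for credit-agreement review.
FIELDS = {
    "debt_maturities": "list of maturity dates with amounts",
    "change_of_control": "summary of change-of-control provisions",
    "restricted_payments": "summary of restricted payment baskets",
    "litigation": "any litigation mentions",
}

def build_extraction_prompt(document_text: str) -> str:
    """Produce the same prompt every time so outputs stay comparable."""
    schema = json.dumps(FIELDS, indent=2)
    return (
        "You are reviewing a credit agreement. Extract the fields below "
        "and answer ONLY with JSON matching these keys. If a field is "
        "absent in the document, return null for it.\n"
        f"Fields:\n{schema}\n\nDocument:\n{document_text}"
    )

prompt = build_extraction_prompt("…offering memorandum text…")
print(prompt[:80])
```

Because the schema lives in one place, every deal gets interrogated the same way, and reviewing the JSON output is much faster than rereading the document.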
**Risk classification and feature engineering**
Underwriters already think in signals: sector stress, refinancing risk, sponsor quality, leverage trend, EBITDA adjustments, and disclosure quality. Machine learning just formalizes that thinking into features a model can use.
Learn how to turn underwriting judgment into variables. If you can define a feature set for default risk or spread widening risk, you can work with data science teams instead of waiting on them.
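Turning judgment into variables can be as simple as a function that maps raw inputs an analyst already tracks to named features. A sketch with hypothetical keys and thresholds:

```python
# Illustrative mapping from underwriting judgment to model features.
# All keys and figures are hypothetical.
def issuer_features(fin: dict) -> dict:
    leverage = fin["total_debt"] / fin["ebitda"]
    return {
        "leverage": leverage,
        # Signal: is the debt load growing?
        "leverage_rising": fin["total_debt"] > fin["total_debt_prior"],
        # Signal: refinancing concentration in the next 24 months.
        "near_term_maturity_pct": fin["debt_due_24m"] / fin["total_debt"],
        # Signal: how aggressive are the EBITDA add-backs?
        "adj_ebitda_gap": (fin["adj_ebitda"] - fin["ebitda"]) / fin["ebitda"],
        "sponsor_backed": int(fin["sponsor_backed"]),
    }

features = issuer_features({
    "total_debt": 1200.0, "total_debt_prior": 1000.0,
    "ebitda": 300.0, "adj_ebitda": 360.0,
    "debt_due_24m": 480.0, "sponsor_backed": True,
})
print(features["leverage"], features["adj_ebitda_gap"])  # 4.0 0.2
```

Each feature encodes a question an underwriter already asks; the dictionary is just that question in a form a model, or a data science colleague, can consume directly.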
**Model evaluation and bias checking**
A model that looks accurate on paper can still fail badly on new deals or stressed markets. Learn basic concepts like precision/recall, calibration, overfitting, leakage, and backtesting against out-of-sample periods.
This matters because underwriters are ultimately accountable for decisions that affect capital allocation. If an AI tool recommends “low risk” on a highly levered sponsor-backed issuer with weak disclosure quality, you need to know when the model is lying.
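A tiny worked example shows why accuracy alone misleads in underwriting, where the deals that matter are the rare stressed ones (the labels below are invented):

```python
# Toy evaluation in pure Python: a lazy "model" that calls every deal
# low-risk still looks accurate if stress is rare.
def precision_recall(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# 1 = deal later showed credit stress; only 2 of 10 did.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_lazy = [0] * 10  # predicts "low risk" on everything

accuracy = sum(t == p for t, p in zip(y_true, y_lazy)) / len(y_true)
print(accuracy)                        # 0.8, yet it missed every stressed deal
print(precision_recall(y_true, y_lazy))  # (0.0, 0.0)
```

Eighty percent accuracy with zero recall on the positive class is exactly the failure mode behind a confident "low risk" call on a weak issuer.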
**Workflow automation with SQL and APIs**
The fastest win for an underwriter is not building a new model; it is reducing manual workflow friction. Learn SQL to query internal deal data and basic API usage to connect market data sources, filing databases, or document repositories.
This skill helps you build screening pipelines: pull recent comps, compare covenant terms across deals, or track issuance trends by sector. In six to eight weeks of focused work, you can become the person who turns messy inputs into usable underwriting intelligence.
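The SQL you need is mostly filtering, parameterizing, and sorting. A self-contained sketch using an in-memory SQLite table as a stand-in for an internal deal database (table name and columns are hypothetical):

```python
import sqlite3

# In-memory stand-in for an internal deal database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE deals (
    issuer TEXT, sector TEXT, leverage REAL, pricing_year INTEGER)""")
conn.executemany(
    "INSERT INTO deals VALUES (?, ?, ?, ?)",
    [("AcmeCo", "healthcare", 5.2, 2025),
     ("BetaCorp", "healthcare", 3.8, 2025),
     ("GammaInc", "tech", 6.1, 2024)],
)

# Screen: recent healthcare comps, highest leverage first.
rows = conn.execute(
    """SELECT issuer, leverage FROM deals
       WHERE sector = ? AND pricing_year >= ?
       ORDER BY leverage DESC""",
    ("healthcare", 2025),
).fetchall()
print(rows)  # [('AcmeCo', 5.2), ('BetaCorp', 3.8)]
```

Swap the in-memory table for your firm's deal database and the same three-line query becomes a reusable comp screen.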
Where to Learn
**Coursera — Machine Learning Specialization by Andrew Ng**
Good for understanding supervised learning basics without getting buried in math. Focus on regression/classification concepts so you can evaluate underwriting models intelligently.
**DataCamp — Python for Finance**
Useful for pandas workflows tied to financial statements and time-series analysis. Pair it with your own issuer data so the exercises map to real underwriting tasks.
**Kaggle Learn — Pandas and Intro to Machine Learning**
Short modules that are easy to finish in evenings over 2–3 weeks. Best used for building speed on data wrangling and model evaluation basics.
**Book: Machine Learning for Asset Managers by Marcos López de Prado**
Not written for underwriters specifically, but excellent for understanding why financial ML fails in production. Read it for model validation discipline more than algorithm recipes.
**OpenAI API docs + Microsoft Excel Power Query**
Use OpenAI docs to learn document extraction workflows and Power Query to clean deal data without waiting on engineering support. Together they cover the practical automation layer most underwriting teams actually need.
How to Prove It
**Build a covenant extraction tool**
Take 10 real credit agreements or offering memoranda and use Python plus an LLM prompt chain to extract key terms: leverage test levels, restricted payments language, maturity dates, and change-of-control clauses. Show accuracy against manual review.
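The "show accuracy against manual review" step is the part that convinces a committee. A toy sketch of that check, using a regex as a deliberately simple stand-in for the LLM prompt chain (the document snippets and labels are invented):

```python
import re

# Toy extraction-accuracy check. A regex stands in for the LLM step so
# the comparison logic itself is easy to see.
docs = {
    "deal_a": "The Notes mature on June 15, 2029. Leverage test: 4.50x.",
    "deal_b": "The Term Loans mature on March 1, 2031.",
}
manual_review = {"deal_a": "June 15, 2029", "deal_b": "March 1, 2031"}

MATURITY = re.compile(r"mature on ([A-Z][a-z]+ \d{1,2}, \d{4})")

def extract_maturity(text: str):
    m = MATURITY.search(text)
    return m.group(1) if m else None

hits = sum(extract_maturity(t) == manual_review[k] for k, t in docs.items())
print(f"accuracy: {hits}/{len(docs)}")  # accuracy: 2/2
```

With a real LLM in place of the regex, the structure stays identical: extracted value versus manually reviewed value, scored per field, per document.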
**Create an issuer risk scorecard**
Use public filings from one sector like REITs or BDCs and build a simple scoring model based on leverage trend, liquidity runway, margin compression, and refinancing concentration. Present it as an underwriting aid rather than a prediction engine.
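A scorecard can be nothing more than additive points per signal. A minimal sketch; the thresholds and weights below are illustrative, not calibrated:

```python
# Toy additive scorecard: each signal contributes points.
# Thresholds and weights are illustrative only.
def score_issuer(m: dict) -> int:
    score = 0
    score += 2 if m["leverage"] > 5.0 else (1 if m["leverage"] > 4.0 else 0)
    score += 2 if m["liquidity_months"] < 12 else 0          # short runway
    score += 1 if m["margin_trend"] < 0 else 0               # compression
    score += 2 if m["pct_debt_due_24m"] > 0.35 else 0        # refi wall
    return score  # higher = more underwriting attention needed

print(score_issuer({
    "leverage": 5.4,
    "liquidity_months": 9,
    "margin_trend": -0.02,
    "pct_debt_due_24m": 0.40,
}))  # 7
```

Framing it as points-for-attention rather than a default probability is exactly what keeps it an underwriting aid instead of a prediction engine.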
**Automate peer comparison**
Pull data from annual reports or public sources into a spreadsheet or Python notebook that compares leverage ratios, interest coverage, FCF conversion, and maturity walls across peers. This demonstrates that you can replace manual comp work with something faster and cleaner.
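The notebook version of that comp table is a few lines once the inputs are structured. A sketch with invented peer data, ranked by leverage:

```python
# Hypothetical peer data pulled from annual reports (figures invented).
peers = {
    "AcmeCo":   {"debt": 1200, "ebitda": 400, "interest": 60},
    "BetaCorp": {"debt": 800,  "ebitda": 320, "interest": 40},
    "GammaInc": {"debt": 2500, "ebitda": 500, "interest": 150},
}

# Build (name, leverage, coverage) rows, ranked lowest leverage first.
table = sorted(
    ((name, p["debt"] / p["ebitda"], p["ebitda"] / p["interest"])
     for name, p in peers.items()),
    key=lambda row: row[1],
)
for name, lev, cov in table:
    print(f"{name:<9} leverage {lev:.1f}x  coverage {cov:.1f}x")
```

The same structure extends to FCF conversion and maturity walls: add a column per metric, and the ranking logic stays one `sorted` call.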
**Backtest spread movement vs disclosure signals**
Collect several quarters of earnings releases and bond spread movements for one industry. Test whether changes in guidance language or risk-factor wording correlate with widening spreads; this shows real underwriting thinking plus basic ML discipline.
What NOT to Learn
**Deep neural network theory**
You do not need transformer architecture details unless you plan to join a model-building team full time. For underwriting relevance in 2026, applied extraction and evaluation matter more than research-level ML math.
**Generic “AI strategy” content**
Slides about transformation do not help when you are reviewing debt terms at midnight before committee. Skip broad management content unless it connects directly to document review or decision support.
**No-code hype tools without auditability**
If you cannot explain how outputs were generated or reproduce them later, they are risky in banking workflows. Stick to tools where versioning, logs, and human review are possible.
A realistic timeline is eight weeks if you stay focused:
- Weeks 1–2: Python basics + pandas
- Weeks 3–4: Prompting for document extraction
- Weeks 5–6: SQL + simple model evaluation
- Weeks 7–8: Build one underwriting project end-to-end
If you are an underwriter in investment banking right now, your goal is not to become the AI person on the team. Your goal is to become the underwriter who uses AI better than everyone else while still making decisions that hold up under scrutiny.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.