How to Integrate LangGraph with LangSmith for Multi-Agent Investment Banking Systems
Why this integration matters
If you’re building investment banking workflows with multiple agents, LangGraph gives you the control plane: routing, state, retries, and human checkpoints. LangSmith gives you observability: traces, evaluations, and debugging across the whole agent graph.
Together, they let you run multi-agent systems for tasks like deal screening, pitchbook drafting, or compliance review with auditability. That matters when every step needs to be explainable, reproducible, and measurable.
Prerequisites
- Python 3.10+
- langgraph, langchain, and langsmith installed
- An LLM provider key set in your environment
- A LangSmith account with:
  - LANGSMITH_API_KEY
  - LANGSMITH_TRACING=true
  - LANGSMITH_PROJECT set
- Access to a model that can handle structured banking workflows
- Basic familiarity with:
  - LangGraph StateGraph
  - LangChain chat models
  - Python environment variables
Install the packages:
pip install langgraph langchain langchain-openai langsmith
Set environment variables:
export OPENAI_API_KEY="your-key"
export LANGSMITH_API_KEY="your-langsmith-key"
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="investment-banking-agents"
Integration Steps
1) Define the shared graph state
In investment banking systems, every agent needs the same working context: deal name, sector, risk flags, and outputs from prior nodes. Keep this in a typed state object so your graph remains deterministic.
from typing import TypedDict, Annotated, List
import operator

class BankingState(TypedDict):
    deal_name: str
    sector: str
    memo: str
    risks: List[str]
    recommendation: str
    messages: Annotated[list, operator.add]
This state becomes the contract between agents. If one node enriches the sector analysis and another performs compliance review, both read and write against the same structure.
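The reducer on messages is what keeps that contract additive: Annotated[list, operator.add] tells LangGraph to concatenate each node's message updates instead of overwriting them. A minimal illustration of the merge behavior in plain Python:

```python
import operator

# Annotated[list, operator.add] means updates to "messages" are merged
# by list concatenation, so every node's output is preserved in order.
research_update = ["market summary message"]
risk_update = ["risk review message"]

merged = operator.add(research_update, risk_update)
print(merged)  # ['market summary message', 'risk review message']
```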
2) Build LangGraph nodes for banking tasks
Use separate nodes for research, risk review, and final recommendation. In a real system these could call internal data sources, but here we’ll wire them to an LLM so you can see the integration pattern.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def research_node(state: BankingState):
    prompt = f"""
    You are an investment banking analyst.
    Deal: {state['deal_name']}
    Sector: {state['sector']}
    Write a concise market summary for an internal memo.
    """
    result = llm.invoke(prompt)
    return {
        "memo": result.content,
        "messages": [result],
    }

def risk_node(state: BankingState):
    prompt = f"""
    You are a banking risk reviewer.
    Given this memo:
    {state['memo']}
    List material risks as short bullets.
    """
    result = llm.invoke(prompt)
    risks = [line.strip("- ").strip() for line in result.content.splitlines() if line.strip()]
    return {
        "risks": risks,
        "messages": [result],
    }

def recommendation_node(state: BankingState):
    prompt = f"""
    You are a senior banker.
    Deal: {state['deal_name']}
    Memo: {state['memo']}
    Risks: {state['risks']}
    Return one recommendation sentence.
    """
    result = llm.invoke(prompt)
    return {
        "recommendation": result.content,
        "messages": [result],
    }
This is where LangGraph fits well for investment banking use cases. You keep each step isolated and auditable instead of hiding everything inside one giant prompt.
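One payoff of that isolation: the deterministic parts of each node can be unit-tested without any LLM call. As a sketch, the bullet-parsing logic from risk_node can be pulled into a standalone helper (parse_risks is a name introduced here for illustration, not part of either library):

```python
def parse_risks(text: str) -> list[str]:
    """Turn an LLM's bulleted risk list into clean strings."""
    return [line.strip("- ").strip() for line in text.splitlines() if line.strip()]

sample = "- Regulatory pressure\n- Valuation sensitivity\n\n- Execution risk"
print(parse_risks(sample))  # ['Regulatory pressure', 'Valuation sensitivity', 'Execution risk']
```

Extracting helpers like this lets you pin down parsing bugs in ordinary unit tests instead of burning tokens to reproduce them.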
3) Assemble the multi-agent graph
Now connect the nodes with StateGraph. This gives you explicit execution flow and makes it easy to expand later with approvals or conditional routing.
from langgraph.graph import StateGraph, START, END
graph = StateGraph(BankingState)
graph.add_node("research", research_node)
graph.add_node("risk", risk_node)
graph.add_node("recommendation", recommendation_node)
graph.add_edge(START, "research")
graph.add_edge("research", "risk")
graph.add_edge("risk", "recommendation")
graph.add_edge("recommendation", END)
app = graph.compile()
At this point you have a runnable multi-agent pipeline. The important part is that each node is traceable as a separate unit once LangSmith tracing is enabled.
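When you do expand the graph with conditional routing, the routing decision is just a function of state, which keeps it testable on its own. A hedged sketch, assuming a hypothetical human_review node and an arbitrary risk threshold; in the real graph you would register it with graph.add_conditional_edges("risk", route_after_risk, ["human_review", "recommendation"]):

```python
def route_after_risk(state: dict) -> str:
    # Hypothetical policy: more than three material risks triggers
    # a human approval step before any recommendation is drafted.
    return "human_review" if len(state["risks"]) > 3 else "recommendation"

print(route_after_risk({"risks": ["regulatory", "valuation"]}))  # recommendation
```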
4) Enable LangSmith tracing on every run
LangSmith works through environment-based tracing plus optional project configuration. If tracing is enabled before execution, LangGraph node calls are captured automatically.
import os
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "investment-banking-agents"
input_state = {
    "deal_name": "Project Atlas",
    "sector": "Fintech",
    "memo": "",
    "risks": [],
    "recommendation": "",
    "messages": [],
}
result = app.invoke(input_state)
print(result["recommendation"])
For production teams, this is the main value: every node execution shows up in LangSmith with inputs, outputs, latency, and token usage. That gives you a real audit trail for banker-facing workflows.
5) Add LangSmith evaluation for regression checks
Once your graph works, use LangSmith to test whether changes break quality. A common pattern is to run the same deal scenarios through the graph and compare outputs over time.
from langsmith import Client
client = Client()
# Example dataset-style check
example_input = {
    "deal_name": "Project Atlas",
    "sector": "Fintech",
    "memo": "",
    "risks": [],
    "recommendation": "",
    "messages": [],
}
run_output = app.invoke(example_input)
client.create_feedback(
    run_id="manual-run-id-placeholder",  # replace with the ID of a real traced run
    key="banking_recommendation_quality",
    score=1.0,
    comment="Recommendation is concise and aligned with memo.",
)
In practice you’d attach feedback to actual traced runs via the LangSmith UI or the API, using the run IDs LangSmith records. The point is to turn subjective banker review into repeatable evaluation.
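A simple way to make that repeatable is a programmatic scorer you run before writing feedback. The heuristic below is a made-up example, not a LangSmith API; it just encodes "one concise sentence" as the pass/fail score you could pass to create_feedback:

```python
def score_recommendation(rec: str) -> float:
    """Score 1.0 if the recommendation is a single, reasonably short sentence."""
    sentences = [s for s in rec.replace("!", ".").split(".") if s.strip()]
    concise = len(rec.split()) <= 40
    return 1.0 if len(sentences) == 1 and concise else 0.0

print(score_recommendation("Proceed with diligence before advancing to management presentation."))  # 1.0
```

Deterministic scorers like this won't catch subtle quality issues, but they turn "a banker eyeballed it" into a number you can track across runs.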
Testing the Integration
Run this end-to-end check to confirm both tools are connected correctly.
test_input = {
    "deal_name": "Project Atlas",
    "sector": "Fintech",
    "memo": "",
    "risks": [],
    "recommendation": "",
    "messages": [],
}
output = app.invoke(test_input)
print("Memo:", output["memo"])
print("Risks:", output["risks"])
print("Recommendation:", output["recommendation"])
Expected output:
Memo: A concise market summary about fintech...
Risks: ['Regulatory pressure', 'Valuation sensitivity', 'Execution risk']
Recommendation: Proceed with diligence before advancing to management presentation.
If tracing is configured correctly, you should also see a new run in LangSmith showing three distinct node executions.
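To make this smoke test fail loudly instead of relying on eyeballing the prints, you can wrap the same checks in a small validator (a sketch; validate_output is a helper introduced here, not part of either library):

```python
def validate_output(output: dict) -> list[str]:
    """Return a list of problems; an empty list means the run looks healthy."""
    problems = []
    if not output.get("memo"):
        problems.append("memo is empty")
    if not output.get("risks"):
        problems.append("no risks were extracted")
    if not output.get("recommendation"):
        problems.append("recommendation is missing")
    return problems

healthy = {"memo": "Fintech summary...", "risks": ["Regulatory pressure"], "recommendation": "Proceed."}
print(validate_output(healthy))  # []
```

In a CI job you would assert that the returned list is empty and fail the build otherwise.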
Real-World Use Cases
Deal screening workflow
- One agent summarizes company data.
- Another flags regulatory or valuation risks.
- A final agent drafts an IC-ready recommendation.

Pitchbook generation
- Use one node for market comps.
- Another for narrative drafting.
- A review node checks consistency before export.

Compliance-aware analyst copilot
- Route sensitive questions through approval nodes.
- Trace every step in LangSmith for auditability.
- Use evaluations to catch hallucinated financial claims early.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit