How to Integrate LangGraph with LangSmith for Investment Banking Startups

By Cyprian Aarons · Updated 2026-04-22
Tags: langgraph-for-investment-banking, langsmith, startups

Combining LangGraph for investment banking with LangSmith gives you a production-grade pattern for regulated agent workflows: LangGraph handles the stateful, multi-step decisioning, while LangSmith gives you observability, tracing, and evaluation across every run. For startups building finance agents, that means you can ship systems that route deals, analyze documents, and keep a full audit trail when something goes wrong.

Prerequisites

  • Python 3.10+
  • langgraph
  • langchain-core
  • langsmith
  • An API key for LangSmith
  • A LangSmith project created in your account
  • Access to your model provider, such as OpenAI or Anthropic
  • Basic familiarity with Python async/sync execution

Install the packages:

pip install langgraph langchain-core langsmith openai

Set environment variables:

export LANGSMITH_API_KEY="lsv2_..."
export LANGSMITH_TRACING="true"
export LANGSMITH_PROJECT="startup-investment-banking-agent"
export OPENAI_API_KEY="sk-..."
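Before wiring anything up, it can save debugging time to verify that these variables are actually visible to your Python process. A minimal sketch (the helper name is an assumption, not part of either SDK):

```python
import os

REQUIRED_VARS = [
    "LANGSMITH_API_KEY",
    "LANGSMITH_TRACING",
    "LANGSMITH_PROJECT",
    "OPENAI_API_KEY",
]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```

Calling `missing_env_vars()` at startup and failing loudly on a non-empty result is cheaper than chasing silently missing traces later.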

Integration Steps

  1. Build a LangGraph state machine for the banking workflow

    Start with a typed state and a simple graph. In investment banking, you usually want explicit stages: intake, risk review, recommendation, and final output.

    from typing import TypedDict, Annotated
    from operator import add
    
    from langgraph.graph import StateGraph, START, END
    
    class DealState(TypedDict):
        messages: Annotated[list[str], add]
        risk_score: int
        recommendation: str
    
    def intake_node(state: DealState):
        # The `add` reducer appends whatever a node returns to the existing
        # list, so return only the new message, not the full history.
        return {"messages": ["Intake completed"]}

    def risk_node(state: DealState):
        score = 72  # replace with model-backed scoring
        return {"messages": [f"Risk scored at {score}"], "risk_score": score}

    def decision_node(state: DealState):
        rec = "Proceed" if state["risk_score"] < 80 else "Escalate"
        return {"messages": [f"Decision: {rec}"], "recommendation": rec}
    
    graph = StateGraph(DealState)
    graph.add_node("intake", intake_node)
    graph.add_node("risk", risk_node)
    graph.add_node("decision", decision_node)
    
    graph.add_edge(START, "intake")
    graph.add_edge("intake", "risk")
    graph.add_edge("risk", "decision")
    graph.add_edge("decision", END)
    
    app = graph.compile()
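    If high-risk deals should bypass the decision node entirely, LangGraph's `add_conditional_edges` takes a routing function that returns the name of the next node. A standalone sketch of that predicate (the `escalate` node and the threshold mirroring the decision logic above are assumptions):

```python
from typing import TypedDict

class DealState(TypedDict, total=False):
    risk_score: int

def route_after_risk(state: DealState) -> str:
    """Return the name of the next node based on the scored risk."""
    return "escalate" if state["risk_score"] >= 80 else "decision"

# Sketch of the wiring, replacing the fixed risk -> decision edge:
#   graph.add_conditional_edges(
#       "risk", route_after_risk, {"decision": "decision", "escalate": "escalate"}
#   )
```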
    
  2. Add a LangSmith tracer so every run is recorded

    LangSmith works best when you trace the same runnable you execute in production. The cleanest path is to enable tracing through the environment variables above and pass run metadata through the `config` argument when you invoke the compiled graph.

    result = app.invoke(
        {"messages": ["Review startup acquisition target"], "risk_score": 0, "recommendation": ""},
        config={
            "run_name": "investment-banking-deal-review",
            "tags": ["startup", "investment-banking"],
            "metadata": {
                "team": "mna",
                "env": "prod"
            }
        }
    )

    print(result)
    

    If LANGSMITH_TRACING=true is set, LangGraph execution gets captured in LangSmith automatically through the underlying runnable instrumentation.
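    Because the integration silently records nothing when that variable is unset, a startup check can fail fast instead. A sketch (which string values count as truthy here is an assumption; LangSmith reads the variable itself):

```python
import os

def tracing_enabled() -> bool:
    """Report whether LANGSMITH_TRACING is set to a truthy value."""
    return os.environ.get("LANGSMITH_TRACING", "").strip().lower() in {"1", "true"}

# Fail fast at startup, e.g.:
#   assert tracing_enabled(), "LANGSMITH_TRACING is off; runs will not be recorded"
```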

  3. Wrap model calls inside graph nodes

    In real systems, each node should call an LLM or tool. Use a chat model inside the node so LangSmith traces prompts, responses, and latency per step.

    from langchain_openai import ChatOpenAI
    from langchain_core.messages import HumanMessage
    
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    
    def llm_risk_node(state: DealState):
        prompt = (
            f"Assess investment banking risk for this deal context:\n"
            f"{' | '.join(state['messages'])}\n"
            f"Return only a numeric score from 0 to 100."
        )
        response = llm.invoke([HumanMessage(content=prompt)])
        # Fragile if the model returns extra prose; validate in production.
        score = int(response.content.strip())

        # Return only the delta; the `add` reducer appends the new message.
        return {"messages": [f"LLM risk score: {score}"], "risk_score": score}
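    `int(response.content.strip())` raises the moment the model wraps the score in prose. A hedged sketch of a more defensive parser (`parse_risk_score` is a name introduced here, not part of either library):

```python
import re

def parse_risk_score(text: str, default: int = 100) -> int:
    """Extract the first integer from model output and clamp it to 0-100.

    Falls back to `default` (treated as maximum risk) when no number is found.
    """
    match = re.search(r"-?\d+", text)
    if match is None:
        return default
    return max(0, min(100, int(match.group())))
```

Defaulting to maximum risk on a parse failure keeps the workflow conservative: an unreadable response escalates rather than proceeds.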
    
  4. Attach custom metadata for compliance and startup reporting

    Banking workflows need traceability. Add deal IDs, analyst IDs, and customer segments to each run so you can search them later in LangSmith.

    result = app.invoke(
        {"messages": ["Analyze Series B acquisition target"], "risk_score": 0, "recommendation": ""},
        config={
            "run_name": "series-b-target-review",
            "tags": ["banking", "startup-acquisition"],
            "metadata": {
                "deal_id": "DEAL-10492",
                "analyst_id": "ANL-77",
                "sector": "fintech",
                "region": "us-east"
            }
        }
    )

    print(result["recommendation"])
    
  5. Store and inspect traces in LangSmith

    Once runs are flowing, use the LangSmith client to inspect them during debugging or evaluation cycles.

    from langsmith import Client
    
    client = Client()
    
    runs = client.list_runs(
        project_name="startup-investment-banking-agent",
        is_root=True,  # only top-level runs, not nested child spans
        limit=5
    )

    for r in runs:
        print(r.id, r.name, r.status)
    

Testing the Integration

Run a minimal end-to-end check with tracing enabled.

result = app.invoke(
    {"messages": ["Evaluate startup funding memo"], "risk_score": 0, "recommendation": ""},
    config={
        "run_name": "smoke-test-deal-flow",
        "tags": ["smoke-test"],
        "metadata": {"deal_id": "SMOKE-1"}
    }
)

print("Recommendation:", result["recommendation"])
print("Messages:", result["messages"])

Expected output:

Recommendation: Proceed
Messages: ['Evaluate startup funding memo', 'Intake completed', 'Risk scored at 72', 'Decision: Proceed']

If tracing is configured correctly, you should also see the run in your LangSmith project with the same name and metadata.
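To make the smoke test repeatable in CI rather than eyeballed, you can assert on the shape of the final state. A sketch assuming the state keys defined in step 1 (the helper name is an assumption):

```python
def check_result(result: dict) -> None:
    """Fail fast if a deal-review run produced an invalid final state."""
    assert result["recommendation"] in {"Proceed", "Escalate"}, result
    assert 0 <= result["risk_score"] <= 100, result
    assert result["messages"], "expected at least one trace message"
```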

Real-World Use Cases

  • Deal screening agent
    Route inbound startup opportunities through intake, diligence scoring, and escalation logic with full trace history for analysts.

  • Investment memo generator
    Combine document extraction nodes with LLM analysis nodes to produce structured memos while logging every intermediate step in LangSmith.

  • Compliance review workflow
    Build an agent that flags policy violations, records reviewer decisions, and preserves an auditable chain of reasoning across all branches of the graph.


By Cyprian Aarons, AI Consultant at Topiax.