How to Integrate FastAPI for wealth management with LangChain for multi-agent systems

By Cyprian Aarons · Updated 2026-04-21

FastAPI gives you a clean way to expose wealth management workflows as HTTP services. LangChain gives you the orchestration layer to route requests across multiple agents, tools, and retrieval steps. Put them together, and you can build systems that answer client questions, fetch portfolio data, generate recommendations, and hand off specialized tasks without turning your API into a ball of if/else logic.

Prerequisites

  • Python 3.10+
  • fastapi
  • uvicorn
  • langchain
  • langchain-openai or another LangChain chat model provider
  • pydantic
  • Access to your wealth management backend:
    • portfolio service
    • client profile service
    • market data service
  • An API key for your LLM provider
  • A running FastAPI app with authenticated endpoints

Install the basics:

pip install fastapi uvicorn langchain langchain-openai pydantic httpx

Integration Steps

  1. Expose wealth management capabilities through FastAPI

Start by wrapping your domain functions in API endpoints. Keep these endpoints narrow: one for portfolio lookup, one for risk profile, one for recommendation generation.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List

app = FastAPI(title="Wealth Management API")

class PortfolioResponse(BaseModel):
    client_id: str
    holdings: List[dict]
    total_value: float

@app.get("/clients/{client_id}/portfolio", response_model=PortfolioResponse)
def get_portfolio(client_id: str):
    # Replace with real DB/service call
    if client_id != "client_123":
        raise HTTPException(status_code=404, detail="Client not found")

    return PortfolioResponse(
        client_id=client_id,
        holdings=[
            {"symbol": "AAPL", "quantity": 20, "value": 4200},
            {"symbol": "MSFT", "quantity": 10, "value": 3900},
        ],
        total_value=8100,
    )
  2. Wrap FastAPI endpoints as LangChain tools

LangChain agents need tools they can call. The clean pattern is to create a small HTTP client wrapper around your FastAPI service and expose it as a tool.

import httpx
from langchain_core.tools import tool

BASE_URL = "http://localhost:8000"

@tool
def fetch_client_portfolio(client_id: str) -> str:
    """Fetch a client's portfolio from the wealth management API."""
    response = httpx.get(f"{BASE_URL}/clients/{client_id}/portfolio", timeout=10)
    response.raise_for_status()
    # Return the raw JSON body as text for the agent to consume
    return response.text

If you want multiple agent roles, define separate tools for each domain concern:

@tool
def fetch_risk_profile(client_id: str) -> str:
    """Fetch a client's risk profile."""
    response = httpx.get(f"{BASE_URL}/clients/{client_id}/risk-profile", timeout=10)
    response.raise_for_status()
    # Return the raw JSON body as text for the agent to consume
    return response.text
  3. Create specialized LangChain agents for different tasks

For multi-agent systems, don’t force one agent to do everything. Use separate agents for portfolio analysis, compliance checks, and client communication.

from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

portfolio_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a portfolio analysis agent for wealth management."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # required by create_tool_calling_agent
])

portfolio_agent = create_tool_calling_agent(
    llm=llm,
    tools=[fetch_client_portfolio],
    prompt=portfolio_prompt,
)

portfolio_executor = AgentExecutor(
    agent=portfolio_agent,
    tools=[fetch_client_portfolio],
    verbose=True,
)

Add a second agent for compliance-style review:

compliance_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a compliance review agent. Flag risky or unsuitable recommendations."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # required by create_tool_calling_agent
])

compliance_agent = create_tool_calling_agent(
    llm=llm,
    tools=[],
    prompt=compliance_prompt,
)

compliance_executor = AgentExecutor(
    agent=compliance_agent,
    tools=[],
    verbose=True,
)
  4. Orchestrate the agents from a FastAPI endpoint

This is where the integration becomes useful. Your FastAPI endpoint becomes the entry point; LangChain handles which agent runs first and how results flow between them.

from fastapi import Body

@app.post("/agents/recommendation")
def recommend_allocation(payload: dict = Body(...)):
    # Plain def, not async def: FastAPI runs sync endpoints in a threadpool,
    # so the blocking AgentExecutor.invoke calls below don't stall the event loop
    client_id = payload.get("client_id")
    if not client_id:
        raise HTTPException(status_code=422, detail="client_id is required")

    portfolio_result = portfolio_executor.invoke({
        "input": f"Analyze the portfolio for client {client_id}."
    })

    compliance_result = compliance_executor.invoke({
        "input": f"Review this recommendation context: {portfolio_result['output']}"
    })

    return {
        "client_id": client_id,
        "portfolio_analysis": portfolio_result["output"],
        "compliance_review": compliance_result["output"],
    }

If you want stronger control over routing between agents, use a supervisor pattern in your own service layer and keep each agent narrowly scoped.
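A supervisor can be as simple as a deterministic router in your service layer. The sketch below uses keyword rules (the intent names and rules are assumptions, not part of LangChain); each returned key would map to one of the narrowly scoped executors defined earlier.

```python
def route_intent(question: str) -> str:
    """Pick which agent should handle a request. Deterministic keyword
    rules keep routing auditable, which matters in a regulated domain."""
    q = question.lower()
    if "compliance" in q or "suitab" in q:
        return "compliance"
    if "risk" in q:
        return "risk"
    return "portfolio"  # default: portfolio analysis

# In the service layer you would look the key up in a registry, e.g.:
# AGENT_EXECUTORS = {"portfolio": portfolio_executor, "compliance": compliance_executor}
# result = AGENT_EXECUTORS[route_intent(question)].invoke({"input": question})
```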

  5. Run the app and wire the LangChain runtime correctly

Make sure your FastAPI app is running before invoking the agent tools over HTTP.

uvicorn main:app --reload --port 8000

Set your LLM credentials in the environment:

export OPENAI_API_KEY="your-key"

At this point, LangChain can call your FastAPI-backed tools during execution, and your API can expose higher-level orchestration endpoints to clients or internal services.

Testing the Integration

Use a simple request against the orchestration endpoint to verify both layers work together.

import httpx

response = httpx.post(
    "http://localhost:8000/agents/recommendation",
    json={"client_id": "client_123"},
    timeout=30,
)

print(response.status_code)
print(response.json())

Expected output:

200
{
  "client_id": "client_123",
  "portfolio_analysis": "...",
  "compliance_review": "..."
}

If this fails, check these first:

  • FastAPI server is running on the expected port
  • The tool URLs match your deployed routes
  • Your OpenAI key is loaded in the environment
  • Tool timeouts are long enough for LLM + HTTP calls
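The first three checks can be partly automated with a small preflight helper. This is a stdlib-only sketch; the host and port defaults are assumptions matching the uvicorn command above.

```python
import os
import socket

def preflight_check(host: str = "localhost", port: int = 8000) -> list:
    """Return a list of problems to fix before calling the agent endpoints."""
    problems = []
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    try:
        # Cheap TCP probe: is anything listening where the tools will call?
        with socket.create_connection((host, port), timeout=2):
            pass
    except OSError:
        problems.append(f"nothing listening on {host}:{port}")
    return problems
```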

Real-World Use Cases

  • Client advisory assistant

    • A front-office app asks one endpoint for portfolio context.
    • LangChain routes to an analysis agent and then a compliance agent before returning advice.
  • Advisor workflow automation

    • One agent pulls holdings from FastAPI.
    • Another drafts meeting notes or follow-up actions.
    • A third validates suitability against policy rules.
  • Multi-agent research desk

    • One agent gathers market data.
    • Another compares it against client exposure.
    • A final agent generates concise recommendations for advisors to review.
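Each of these use cases is a sequential hand-off, which your service layer can express as plain function composition (a generic sketch; in practice each step would wrap an `AgentExecutor.invoke` call):

```python
def run_pipeline(steps, payload):
    """Feed each step's output into the next. Steps are plain callables,
    e.g. lambda text: executor.invoke({"input": text})["output"]."""
    result = payload
    for step in steps:
        result = step(result)
    return result
```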

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
