How to Integrate AutoGen for wealth management with Docker for production AI

By Cyprian Aarons · Updated 2026-04-21

Combining AutoGen for wealth management with Docker gives you a clean path from agent prototyping to production deployment. AutoGen handles multi-agent orchestration for tasks like portfolio analysis, compliance checks, and client reporting, while Docker gives you repeatable runtime isolation so the same agent stack runs the same way in dev, staging, and production.

For wealth management teams, that means you can package an AI assistant that reviews holdings, drafts advisor notes, and routes sensitive workflows through controlled containers instead of a developer laptop.

Prerequisites

  • Python 3.10+
  • Docker Engine installed and running
  • A working AutoGen installation for your wealth management agent package
  • Access to your model provider credentials in environment variables
  • Basic familiarity with:
    • autogen agent setup
    • Docker images and containers
    • Python virtual environments

Install the core packages:

pip install pyautogen docker

If your wealth management implementation uses an internal AutoGen extension or wrapper package, install that too.
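Before wiring agents together, it helps to fail fast when provider credentials are missing rather than letting an agent die mid-conversation. A minimal sketch — the variable names match the examples below, but adjust them to your provider:

```python
import os

# Environment variables the examples in this guide expect.
REQUIRED_VARS = ("OPENAI_API_KEY", "OPENAI_MODEL")

def missing_credentials(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: only the API key is set, so the model variable is reported missing.
print(missing_credentials({"OPENAI_API_KEY": "sk-test"}))  # ['OPENAI_MODEL']
```

Call this once at startup and refuse to launch agents if the returned list is non-empty.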

Integration Steps

  1. Define your wealth management agents in Python

Start with a small agent graph: one assistant for analysis and one user proxy for execution control. In production, keep the tool execution boundary explicit.

import os
from autogen import AssistantAgent, UserProxyAgent

llm_config = {
    "config_list": [
        {
            "model": os.environ["OPENAI_MODEL"],
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    ],
    "temperature": 0,
}

portfolio_analyst = AssistantAgent(
    name="portfolio_analyst",
    llm_config=llm_config,
    system_message=(
        "You are a wealth management analyst. "
        "Summarize portfolio risk, concentration, and rebalancing actions."
    ),
)

advisor_proxy = UserProxyAgent(
    name="advisor_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

  2. Create a Docker image for the agent runtime

Use Docker to freeze dependencies and make the runtime predictable. Build an image that installs your agent code and sets a stable entrypoint.

import docker

docker_client = docker.from_env()

image_tag = "wealth-agent:prod"

dockerfile = """
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

COPY . /app
CMD ["python", "app.py"]
"""

# Write the Dockerfile from Python if you want to automate packaging.
with open("Dockerfile", "w", encoding="utf-8") as f:
    f.write(dockerfile)

# Build the image using the Docker SDK.
image, build_logs = docker_client.images.build(path=".", tag=image_tag)
print(image.tags)

Make sure requirements.txt includes:

pyautogen
docker

  3. Run the AutoGen workflow inside a container

This is where the integration matters. Your app should start the agents, send a portfolio task, and keep execution inside Docker.

import os

import docker

client = docker.from_env()

container = client.containers.run(
    image="wealth-agent:prod",
    detach=True,
    environment={
        "OPENAI_API_KEY": os.environ["OPENAI_API_KEY"],
        "OPENAI_MODEL": os.environ["OPENAI_MODEL"],
    },
    network_mode="bridge",
)

print(container.id)

If you want to run a specific Python command instead of the image's default command, pass a command argument:

container = client.containers.run(
    image="wealth-agent:prod",
    command=["python", "-c", "print('agent container ready')"],
    detach=True,
)
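One-shot jobs like this tend to leak stopped containers if nothing cleans up after them. A small wrapper sketched against docker-py's containers API (run, wait, logs, and remove are real SDK calls; the helper itself is an assumption, not part of AutoGen or Docker):

```python
def run_and_capture(client, image, command=None, **kwargs):
    """Run a container to completion, return (exit_code, logs), and clean up."""
    container = client.containers.run(image, command=command, detach=True, **kwargs)
    try:
        result = container.wait()  # blocks until the container process exits
        logs = container.logs().decode("utf-8", errors="replace")
        return result["StatusCode"], logs
    finally:
        # Remove the container even if wait() or logs() raised.
        container.remove(force=True)
```

For example, code, logs = run_and_capture(client, "wealth-agent:prod") lets the host process inspect the exit code before persisting any output.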

  4. Wire message passing between the advisor and analyst agents

Use AutoGen’s initiate_chat() method to kick off a controlled conversation. In production AI systems, this is where you inject prompts like portfolio reviews or suitability checks.

task = """
Review this portfolio:
- 45% US large cap equities
- 20% international equities
- 25% investment grade bonds
- 10% cash

Return:
1. Concentration risk summary
2. Rebalancing recommendation
3. Advisor-ready client note
"""

chat_result = advisor_proxy.initiate_chat(
    recipient=portfolio_analyst,
    message=task,
)

print(chat_result.summary)

If your workflow needs multiple agents, add a compliance reviewer or tax-aware planner using another AssistantAgent instance and route outputs through them before returning a client-facing response.
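A compliance reviewer agent usually wants a deterministic backstop alongside the LLM check, for example a plain Python function it can call as a tool. A hedged sketch, where the banned-phrase list is a stand-in for your firm's actual policy rules:

```python
# Stand-in policy list; replace with your compliance team's approved rules.
BANNED_PHRASES = ("guaranteed return", "risk-free", "cannot lose")

def flag_suitability_issues(note: str) -> list[str]:
    """Return any banned phrases found in a draft client note."""
    lowered = note.lower()
    return [phrase for phrase in BANNED_PHRASES if phrase in lowered]

print(flag_suitability_issues("Rebalancing now offers a guaranteed return."))
# ['guaranteed return']
```

Running this before the reviewer agent sees the draft gives you a hard failure mode that does not depend on model behavior.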

  5. Persist outputs from Docker back to your application

In production you usually want structured output stored outside the container so it survives restarts. Use Docker volumes or copy files out of the container after execution.

import json

result_path = "/tmp/portfolio_result.json"

# Persist the chat result on the host so it survives container restarts.
with open(result_path, "w", encoding="utf-8") as f:
    json.dump(
        {
            "summary": chat_result.summary,
            "messages": chat_result.chat_history,
        },
        f,
        indent=2,
    )

# Stop and remove the container once its output has been collected.
container.stop()
container.remove()

For most teams, I recommend writing results to object storage or a database from the host process instead of relying on container-local state.
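If the agent writes files inside the container rather than on the host, docker-py's container.get_archive returns the file as a tar stream delivered in byte chunks. A small helper to reassemble that stream on the host (get_archive is a real SDK call; the helper itself is illustrative):

```python
import io
import tarfile

def extract_container_archive(chunks, dest):
    """Reassemble get_archive() byte chunks into files under dest; return member names."""
    buffer = io.BytesIO(b"".join(chunks))
    with tarfile.open(fileobj=buffer) as tar:
        tar.extractall(dest)
        return tar.getnames()
```

Typical usage: bits, stat = container.get_archive("/app/portfolio_result.json"), then extract_container_archive(bits, "/tmp/agent_output") before stopping the container.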

Testing the Integration

Run a simple smoke test that confirms both pieces work together: Docker starts the container, AutoGen produces output, and your host process can read it.

import os
from autogen import AssistantAgent, UserProxyAgent

def test_portfolio_agent():
    llm_config = {
        "config_list": [
            {
                "model": os.environ["OPENAI_MODEL"],
                "api_key": os.environ["OPENAI_API_KEY"],
            }
        ],
        "temperature": 0,
    }

    analyst = AssistantAgent("analyst", llm_config=llm_config)
    proxy = UserProxyAgent("proxy", human_input_mode="NEVER", code_execution_config=False)

    result = proxy.initiate_chat(
        analyst,
        message="Analyze a balanced portfolio with 60/40 equity/bond allocation.",
    )

    print(result.summary)

if __name__ == "__main__":
    test_portfolio_agent()

Expected output (exact wording will vary by model and run):

Portfolio analysis completed.
Risk level: moderate.
No major concentration issues detected.
Suggested action: review equity exposure quarterly.

If you run this inside your containerized app and get deterministic output without interactive prompts, your integration is wired correctly.
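Because model wording varies run to run, it is safer to assert on the structure of the output rather than its exact text. A sketch, where the required section labels are assumptions about the output format your prompt requests:

```python
# Section labels your prompt instructs the model to include (adjust to your prompt).
REQUIRED_SECTIONS = ("Risk level", "Suggested action")

def smoke_check(summary: str) -> bool:
    """Raise if the agent output is missing a section downstream code expects."""
    missing = [label for label in REQUIRED_SECTIONS if label not in summary]
    if missing:
        raise AssertionError(f"missing sections: {missing}")
    return True
```

Wiring smoke_check(result.summary) into the test above turns a wording drift into a hard failure instead of a silent bad report.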

Real-World Use Cases

  • Advisor copilot

    • Generate portfolio summaries, draft client emails, and prepare meeting notes inside isolated containers.
  • Compliance review pipeline

    • Route every generated recommendation through a second AutoGen agent that checks suitability language before release.
  • Batch reporting jobs

    • Run nightly containerized agents that summarize household portfolios and push structured reports into your CRM or data warehouse.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

