AutoGen Tutorial (Python): building prompt templates for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build reusable prompt templates in AutoGen with Python, then wire them into agents so your prompts stay consistent across tasks. You need this when your team is past ad hoc prompting and wants repeatable, versioned instructions for research, review, or generation workflows.

What You'll Need

  • Python 3.10+
  • pyautogen installed
  • An OpenAI API key
  • Basic familiarity with AutoGen AssistantAgent and UserProxyAgent
  • A terminal and a code editor
  • Optional: python-dotenv if you want to load secrets from a .env file

Step-by-Step

  1. Start by installing the dependencies and setting your API key. For local development, I prefer environment variables because they keep secrets out of source control.
pip install pyautogen python-dotenv
export OPENAI_API_KEY="your-api-key-here"
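
If you would rather keep the key in a .env file, python-dotenv (listed under What You'll Need) can load it at startup. A minimal sketch, assuming the .env file sits next to your script:

import os
from dotenv import load_dotenv

# .env contains a line like: OPENAI_API_KEY=your-api-key-here
load_dotenv()  # reads .env from the current working directory by default

# Fail fast if the key is absent instead of erroring on the first API call
api_key = os.environ["OPENAI_API_KEY"]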
  2. Define a prompt template as a Python format string. The key idea is to separate the instruction structure from the runtime values so you can reuse the same template across multiple agents and tasks.
from textwrap import dedent

REVIEW_PROMPT = dedent("""
You are a senior {domain} reviewer.
Review the following {artifact_type} for:
- correctness
- edge cases
- missing assumptions
- production readiness

Return:
1. Summary
2. Issues found
3. Recommended changes

Artifact:
{artifact}
""").strip()
  3. Fill the template with task-specific values before sending it to an agent. This keeps the prompt readable and makes it easy to test variations without editing the instruction text itself.
sample_artifact = """
def total(items):
    return sum(items)
"""

filled_prompt = REVIEW_PROMPT.format(
    domain="Python",
    artifact_type="code snippet",
    artifact=sample_artifact,
)

print(filled_prompt)
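
Because the instruction text lives in one place, trying a variation is just a matter of changing arguments. A quick sketch, with hypothetical domains for illustration:

# Render the same template for several review domains
for domain in ("Python", "SQL", "Terraform"):
    variant = REVIEW_PROMPT.format(
        domain=domain,
        artifact_type="code snippet",
        artifact=sample_artifact,
    )
    print(variant.splitlines()[0])  # e.g. "You are a senior SQL reviewer."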
  4. Create an AutoGen assistant agent and send the formatted prompt to it. This example uses a single assistant agent because it is enough to validate your template pattern before you move on to multi-agent flows.
import os
from autogen import AssistantAgent

config_list = [
    {
        "model": "gpt-4o-mini",
        "api_key": os.environ["OPENAI_API_KEY"],
    }
]

assistant = AssistantAgent(
    name="reviewer",
    llm_config={"config_list": config_list},
)

response = assistant.generate_reply(
    messages=[{"role": "user", "content": filled_prompt}]
)
print(response)
  5. If you want stronger structure, wrap your templates in a small helper class. This makes it easier to standardize prompts across teams and avoids copy-pasting string formatting logic into every script.
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    template: str

    def render(self, **kwargs) -> str:
        return self.template.format(**kwargs)

code_review_template = PromptTemplate(REVIEW_PROMPT)

prompt = code_review_template.render(
    domain="Python",
    artifact_type="function",
    artifact="def add(a, b):\n    return a + b",
)

print(prompt)
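Declaring the dataclass with frozen=True makes each template immutable, so you can expose templates as module-level constants and share them across scripts without risking accidental mutation.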
  6. Use a user proxy only when you need tool execution or human-in-the-loop behavior. For pure prompt templating, the assistant alone is enough; for real workflows, pair the same template with a proxy that can execute code or confirm actions.
from autogen import UserProxyAgent

user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    code_execution_config=False,
)

chat_result = user_proxy.initiate_chat(
    assistant,
    message=prompt,
)
print(chat_result.summary)

Testing It

Run the script once with a simple artifact and confirm that the rendered prompt contains all substituted values correctly. Then change one field at a time, like domain or artifact_type, and verify that the output changes without touching the base template.
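
One way to automate that first check is a short assertion pass over the rendered string. A minimal sketch, reusing REVIEW_PROMPT and sample_artifact from the steps above:

rendered = REVIEW_PROMPT.format(
    domain="Python",
    artifact_type="code snippet",
    artifact=sample_artifact,
)

# Each substituted value should appear in the output...
assert "Python" in rendered
assert "code snippet" in rendered
# ...and no placeholder should survive (this check assumes the artifact itself contains no braces)
assert "{" not in rendered and "}" not in rendered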

If you are using generate_reply, confirm the response follows the structure the template requests: summary, issues found, and recommended changes. For production use, test missing and empty inputs too: str.format raises a KeyError when a placeholder has no matching argument, but it silently ignores extra or misspelled keyword arguments, so validate inputs explicitly.
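
A lightweight guard against missing placeholders is to parse the template with the standard library's string.Formatter and compare its named fields to what the caller supplies. A sketch:

from string import Formatter

def check_fields(template: str, provided: dict) -> None:
    # Named placeholders the template expects, e.g. {"domain", "artifact_type", "artifact"}
    expected = {field for _, field, _, _ in Formatter().parse(template) if field}
    missing = expected - provided.keys()
    if missing:
        raise ValueError(f"missing template variables: {sorted(missing)}")

values = {"domain": "Python", "artifact_type": "code snippet", "artifact": sample_artifact}
check_fields(REVIEW_PROMPT, values)  # raises before a malformed prompt reaches an agent
rendered = REVIEW_PROMPT.format(**values)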

Next Steps

  • Add input validation with Pydantic so required template variables fail fast (see the sketch after this list)
  • Store templates in versioned files instead of inline strings
  • Combine this pattern with multi-agent review chains for drafting, critique, and final approval
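
For the Pydantic route, a minimal sketch (assuming Pydantic v2; ReviewPromptVars is a hypothetical model named for this tutorial):

from pydantic import BaseModel

class ReviewPromptVars(BaseModel):
    domain: str
    artifact_type: str
    artifact: str

# Construction raises a ValidationError immediately if any field is missing
fields = ReviewPromptVars(
    domain="Python",
    artifact_type="function",
    artifact="def add(a, b):\n    return a + b",
)
prompt = REVIEW_PROMPT.format(**fields.model_dump())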

By Cyprian Aarons, AI Consultant at Topiax.
