CrewAI Tutorial (Python): building prompt templates for advanced developers

By Cyprian Aarons. Updated 2026-04-21.

This tutorial shows you how to build reusable prompt templates in CrewAI for Python agents, with variable injection, role-specific instructions, and output constraints. You need this when your agent prompts are no longer one-offs and you want a maintainable pattern for finance, insurance, or other production workflows.

What You'll Need

  • Python 3.10+
  • crewai
  • crewai-tools if you plan to add tools later
  • An OpenAI API key set as OPENAI_API_KEY
  • A working virtual environment
  • Basic familiarity with Agent, Task, and Crew in CrewAI

Install the package:

pip install crewai

Step-by-Step

  1. Start by defining a prompt template as a normal Python string. The important part is that it includes placeholders for variables you want to inject at runtime, like customer type, policy type, or tone.
from textwrap import dedent

PROMPT_TEMPLATE = dedent("""
You are a senior assistant for {domain} operations.

Context:
- User type: {user_type}
- Objective: {objective}

Instructions:
1. Ask for missing details only if required.
2. Keep the answer concise and actionable.
3. Use {tone} language.
4. Return the result in {format}.
""").strip()
  2. Next, wrap that template in a helper function so your prompt generation stays consistent across tasks. This is the pattern I use when multiple agents need slightly different instructions but the same structure.
def build_prompt(
    domain: str,
    user_type: str,
    objective: str,
    tone: str = "clear",
    format: str = "bullet points",
) -> str:
    return PROMPT_TEMPLATE.format(
        domain=domain,
        user_type=user_type,
        objective=objective,
        tone=tone,
        format=format,
    )

prompt = build_prompt(
    domain="insurance claims",
    user_type="underwriter",
    objective="summarize claim risk factors",
)
print(prompt)
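One caveat with str.format: any literal braces in the template body, such as an inline JSON example, must be escaped as {{ and }} or .format will raise a KeyError. If your prompts embed JSON samples, the standard library's string.Template is a safer option, since its $-style placeholders leave literal braces alone and substitute() still fails loudly on missing variables. A minimal sketch:

```python
from string import Template

# $-style placeholders; literal {braces} in the JSON example need no escaping.
SAFE_TEMPLATE = Template(
    "You are a senior assistant for $domain operations.\n"
    'Return JSON shaped like {"risk": "...", "summary": "..."}.\n'
    "Objective: $objective"
)

prompt = SAFE_TEMPLATE.substitute(
    domain="insurance claims",
    objective="summarize claim risk factors",
)
print(prompt)
```

Use substitute() rather than safe_substitute() here, so a missing variable raises a KeyError instead of silently leaving the placeholder in the prompt.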
  3. Now connect the generated prompt to a CrewAI agent and task. The trick is to keep the agent role stable while letting the task description carry dynamic context from your template.
from crewai import Agent, Task

agent = Agent(
    role="Claims Analysis Assistant",
    goal="Produce accurate operational summaries from structured claim data",
    backstory="You work with underwriting teams and write concise decision support notes.",
    verbose=True,
)

task = Task(
    description=build_prompt(
        domain="insurance claims",
        user_type="underwriter",
        objective="summarize claim risk factors",
        tone="professional",
        format="JSON",
    ),
    expected_output="A valid JSON object with risk summary fields.",
    agent=agent,
)
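When one agent serves several workflows, a small factory keeps the task definitions uniform. The sketch below is one way to organize it; make_task_kwargs is not a CrewAI API, just a plain helper that produces the keyword arguments you would pass to Task(...):

```python
# Hypothetical helper: builds keyword arguments for crewai.Task(...)
# so every task in a workflow shares the same prompt structure.
def make_task_kwargs(domain: str, user_type: str, objective: str,
                     tone: str = "professional", fmt: str = "JSON") -> dict:
    description = (
        f"You are a senior assistant for {domain} operations.\n"
        f"- User type: {user_type}\n"
        f"- Objective: {objective}\n"
        f"Use {tone} language and return the result in {fmt}."
    )
    return {
        "description": description,
        "expected_output": f"A valid {fmt} response for: {objective}",
    }

kwargs = make_task_kwargs("insurance claims", "underwriter",
                          "summarize claim risk factors")
print(kwargs["description"])
```

You would then call Task(agent=agent, **kwargs) once per workflow variant, keeping the agent definition untouched.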
  4. Assemble the agent and task into a crew and run it. Because the system-level guidance lives in the agent definition and the task-level variables live in the template, you can reuse one agent across many workflows without rewriting its behavior every time.
from crewai import Crew, Process

crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)
  5. For advanced developers, add validation before sending prompts to the model. In production systems, I always check that required variables exist so bad input fails early instead of producing garbage output.
REQUIRED_FIELDS = {"domain", "user_type", "objective"}

def validate_template_inputs(data: dict) -> None:
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Missing template fields: {sorted(missing)}")

payload = {
    "domain": "insurance claims",
    "user_type": "underwriter",
    "objective": "summarize claim risk factors",
}

validate_template_inputs(payload)
safe_prompt = build_prompt(**payload)
print(safe_prompt)
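To keep REQUIRED_FIELDS from drifting out of sync with the template, you can derive the placeholder names directly from the template string using the standard library's string.Formatter. A sketch, assuming a template in the same {placeholder} style as above:

```python
from string import Formatter

PROMPT_TEMPLATE = (
    "You are a senior assistant for {domain} operations.\n"
    "- User type: {user_type}\n"
    "- Objective: {objective}\n"
    "Use {tone} language."
)

def template_fields(template: str) -> set:
    # Formatter().parse yields (literal_text, field_name, format_spec,
    # conversion) tuples; field_name is None for trailing literal text.
    return {field for _, field, _, _ in Formatter().parse(template) if field}

print(template_fields(PROMPT_TEMPLATE))
```

Compute this set once at import time and validate incoming payloads against it, so adding a placeholder to the template automatically tightens validation.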

Testing It

Run the script end to end and confirm that the rendered prompt contains your injected values, such as "insurance claims", "underwriter", and "JSON". Then check that CrewAI returns an output matching your expected format rather than a generic free-form response.

If you are using structured outputs downstream, verify the model’s response can be parsed reliably by your application code. Also test at least one failure case by removing a required field and confirming your validation raises an error before any API call happens.
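The failure case can be scripted without a test framework. This sketch reuses the validator from step 5 and confirms it raises before any model call would happen:

```python
REQUIRED_FIELDS = {"domain", "user_type", "objective"}

def validate_template_inputs(data: dict) -> None:
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Missing template fields: {sorted(missing)}")

# Deliberately drop a required field and confirm validation fails early.
bad_payload = {"domain": "insurance claims", "user_type": "underwriter"}
try:
    validate_template_inputs(bad_payload)
    raised = False
except ValueError as exc:
    raised = True
    print(exc)  # Missing template fields: ['objective']

assert raised, "validation should fail before any API call"
```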

Next Steps

  • Add tool calling with crewai-tools so templates can include live data retrieval steps
  • Move templates into versioned files for auditability and prompt governance
  • Add schema validation with Pydantic for structured outputs from agents

By Cyprian Aarons, AI Consultant at Topiax.