CrewAI Tutorial (Python): building prompt templates for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build reusable prompt templates in CrewAI with Python, then wire them into agents and tasks without hardcoding brittle instructions everywhere. You need this when your prompts start growing past one-off strings and you want consistent behavior, easier maintenance, and cleaner handoffs between agents.

What You'll Need

  • Python 3.10+
  • crewai
  • python-dotenv
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with:
    • Agent
    • Task
    • Crew
    • Process

Step-by-Step

  1. Create a small project and install the dependencies. Keep the setup minimal so you can focus on the template pattern instead of framework plumbing.
mkdir crewai-prompt-templates
cd crewai-prompt-templates
python -m venv .venv
source .venv/bin/activate
pip install crewai python-dotenv
  2. Put your API key in a .env file and load it in Python. This keeps secrets out of source control and makes local runs predictable.
cat > .env << 'EOF'
OPENAI_API_KEY=your_openai_api_key_here
EOF
  3. Define prompt templates as plain Python functions. For intermediate work, I prefer functions over raw strings because they make variables explicit and keep formatting logic in one place.
from dotenv import load_dotenv

load_dotenv()

def research_prompt(topic: str, audience: str) -> str:
    return f"""
You are a senior research analyst.
Write a concise brief about: {topic}

Audience: {audience}
Requirements:
- Focus on practical implications
- Use bullet points for key findings
- Avoid generic advice
""".strip()

def review_prompt(content: str) -> str:
    return f"""
You are a strict editor.
Review the content below for clarity, accuracy, and structure.

Content:
{content}

Return:
- Issues found
- Suggested fixes
- A rewritten version if needed
""".strip()
  4. Use the template output inside CrewAI agents and tasks. The important part is that the template generates the task or agent instruction text before CrewAI runs, so you can reuse it across many workflows.
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Researcher",
    goal="Produce practical research briefs",
    backstory="You turn messy topics into structured notes.",
    verbose=True,
)

editor = Agent(
    role="Editor",
    goal="Improve clarity and correctness",
    backstory="You review technical writing with precision.",
    verbose=True,
)

topic = "prompt templates for CrewAI"
audience = "Python developers building internal AI assistants"

research_task = Task(
    description=research_prompt(topic, audience),
    expected_output="A structured research brief with actionable points.",
    agent=researcher,
)

review_task = Task(
    description=review_prompt("Review the research brief produced by the previous task (provided as context)."),
    expected_output="A review report and improved draft.",
    agent=editor,
    context=[research_task],  # CrewAI passes the researcher's output to the editor
)
  5. Run the crew with a simple sequential process. For production-style workflows, keep each task narrow so the template does one job well instead of trying to do everything at once.
crew = Crew(
    agents=[researcher, editor],
    tasks=[research_task, review_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)
  6. If you want stronger reuse, build a tiny template registry. This is useful when multiple teams share prompts and you want one place to manage naming, variables, and formatting rules.
TEMPLATES = {
    "research": research_prompt,
    "review": review_prompt,
}

def build_task(template_name: str, agent: Agent, **kwargs) -> Task:
    return Task(
        description=TEMPLATES[template_name](**kwargs),
        expected_output="A useful output matching the template intent.",
        agent=agent,
    )

task = build_task(
    "research",
    researcher,
    topic="insurance claim triage",
    audience="operations engineers",
)
print(task.description)
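To catch template misuse early, the registry can also validate variables before rendering. Here is a minimal sketch using inspect.signature against plain-function templates like the ones above; the template body and the render helper name are illustrative, not part of CrewAI:

```python
import inspect
from typing import Callable

def research_prompt(topic: str, audience: str) -> str:
    # Stand-in for the template function defined earlier in the tutorial.
    return f"Write a concise brief about {topic} for {audience}."

TEMPLATES: dict[str, Callable[..., str]] = {"research": research_prompt}

def render(template_name: str, **kwargs) -> str:
    """Render a registered template, failing fast on wrong variable names."""
    template = TEMPLATES[template_name]
    expected = set(inspect.signature(template).parameters)
    missing = expected - kwargs.keys()
    extra = kwargs.keys() - expected
    if missing or extra:
        raise TypeError(
            f"{template_name!r} expects {sorted(expected)}; "
            f"missing={sorted(missing)}, unexpected={sorted(extra)}"
        )
    return template(**kwargs)

print(render("research", topic="claim triage", audience="ops engineers"))
```

Calling render("research", topic="claim triage") without audience raises a TypeError that names the missing variable, which is much easier to debug than a half-rendered prompt reaching the model.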

Testing It

Run the script and confirm two things: first, the generated prompt text matches your variables; second, CrewAI completes both tasks without errors. If output looks too vague, tighten the template by adding constraints like format requirements, length limits, or domain-specific context.

A good test is to change topic and audience values and verify that only the relevant parts of the prompt change. That tells you your templates are parameterized correctly instead of being hardcoded blobs.

If you use this pattern in a real app, add unit tests around the template functions themselves. You do not need to mock CrewAI for that; just assert that the returned string contains the right instructions and inserted values.
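For example, a couple of plain assertion-style tests; the template function is inlined here so the snippet stands alone, but in your project you would import it instead:

```python
# Inlined copy of the research template from the tutorial, so this test is self-contained.
def research_prompt(topic: str, audience: str) -> str:
    return f"""
You are a senior research analyst.
Write a concise brief about: {topic}

Audience: {audience}
Requirements:
- Focus on practical implications
- Use bullet points for key findings
- Avoid generic advice
""".strip()

def test_inserts_variables():
    prompt = research_prompt("vector search", "backend engineers")
    assert "vector search" in prompt
    assert "backend engineers" in prompt
    assert prompt.startswith("You are a senior research analyst.")

def test_is_parameterized():
    # Only the topic line should differ between these two renders.
    a = research_prompt("topic A", "shared audience")
    b = research_prompt("topic B", "shared audience")
    assert a != b
    assert "Audience: shared audience" in a and "Audience: shared audience" in b

test_inserts_variables()
test_is_parameterized()
print("template tests passed")
```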

Next Steps

  • Add Jinja2-style templating if your prompts need conditional sections or loops.
  • Move templates into versioned files so product teams can edit them without touching Python code.
  • Add evaluation tests for prompt variants so you can compare output quality across changes.
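As a sketch of the first bullet, here is what a Jinja2 version of the research template could look like, with a conditional audience line and a loop over requirements. This assumes jinja2 is installed (pip install jinja2); the field names are illustrative:

```python
# A Jinja2 take on the research template: conditional section plus a loop.
# Assumes jinja2 is installed; variable names here are illustrative.
from jinja2 import Template

RESEARCH_TEMPLATE = Template(
    """\
You are a senior research analyst.
Write a concise brief about: {{ topic }}
{% if audience %}
Audience: {{ audience }}
{% endif %}
Requirements:
{% for req in requirements %}
- {{ req }}
{% endfor %}
""",
    trim_blocks=True,    # drop the newline after each {% ... %} tag
    lstrip_blocks=True,  # drop leading whitespace before each tag
)

prompt = RESEARCH_TEMPLATE.render(
    topic="prompt templates for CrewAI",
    audience="Python developers",
    requirements=["Focus on practical implications", "Avoid generic advice"],
)
print(prompt)
```

The conditional means the Audience line disappears entirely when audience is empty, something plain f-strings only handle with extra branching in the function body.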

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
