AutoGen Tutorial (TypeScript): building prompt templates for advanced developers
This tutorial shows you how to build reusable prompt templates in AutoGen with TypeScript, then wire them into an agent workflow that can handle structured, repeatable tasks. You need this when plain string prompts become unmaintainable, especially once you start shipping multi-step agent systems with role-specific instructions, variables, and output constraints.
What You'll Need
- Node.js 18+
- A TypeScript project with `ts-node` or a build step
- `@autogenai/autogen` installed
- An OpenAI API key in `OPENAI_API_KEY`
- Basic familiarity with AutoGen agents and message passing
- A terminal that can run TypeScript files
Step-by-Step
- Start by defining a template contract instead of hardcoding prompts inline. The pattern here is simple: keep the template as data, then render it with typed inputs before sending it to the model.

```typescript
type PromptVars = {
  domain: string;
  audience: string;
  goal: string;
};

function renderPrompt(template: string, vars: PromptVars): string {
  return template.replace(/\{(\w+)\}/g, (_, key: keyof PromptVars) => {
    const value = vars[key];
    if (!value) throw new Error(`Missing template variable: ${key}`);
    return value;
  });
}

const systemTemplate = `
You are a senior {domain} assistant.
Write for {audience}.
Primary goal: {goal}.
`;

const systemPrompt = renderPrompt(systemTemplate, {
  domain: "insurance",
  audience: "underwriters",
  goal: "produce concise risk summaries",
});

console.log(systemPrompt.trim());
```
- Create your agent using a rendered system prompt. In production, this is where you encode tone, format, refusal behavior, and output structure once instead of repeating it across every request.

```typescript
import { AssistantAgent } from "@autogenai/autogen";

const assistant = new AssistantAgent({
  name: "prompt-template-agent",
  modelClientOptions: {
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY!,
  },
  systemMessage: systemPrompt.trim(),
});
```
- Add a second template for task-specific instructions and merge it into each request. This gives you one stable agent prompt plus one variable layer for each job, which is the pattern you want for advanced developer workflows.

```typescript
type TaskVars = {
  policyId: string;
  claimSummary: string;
};

const taskTemplate = `
Analyze policy {policyId}.
Use the following claim summary:
{claimSummary}
Return:
1. Key risk factors
2. Missing information
3. Recommended next action
`;

function renderTaskPrompt(vars: TaskVars): string {
  return taskTemplate
    .replace("{policyId}", vars.policyId)
    .replace("{claimSummary}", vars.claimSummary);
}

const taskPrompt = renderTaskPrompt({
  policyId: "POL-44821",
  claimSummary:
    "Customer reported water damage after a burst pipe. No prior claims in last 24 months.",
});
```
- Send the rendered prompt through AutoGen and keep the response handling explicit. You want the message shape visible in code so downstream parsing, logging, and retries stay predictable.

```typescript
async function main() {
  const result = await assistant.run(taskPrompt);
  console.log("=== MODEL OUTPUT ===");
  console.log(result.messages.at(-1)?.content);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
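The step above mentions retries; as a minimal sketch, you could wrap the model call in a generic retry helper. The name `withRetries` and the backoff numbers are my own assumptions for illustration, not part of the AutoGen API.

```typescript
// Sketch: retry an async call with exponential backoff.
// `withRetries` is a hypothetical helper, not an AutoGen export.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off exponentially: 500ms, 1000ms, 2000ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

You would then call `withRetries(() => assistant.run(taskPrompt))` instead of calling `assistant.run` directly, keeping the retry policy in one place.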
- If you need stronger control, move from plain text templates to structured output templates. For advanced teams, this is usually where you define a JSON schema in the prompt and validate the response before it enters your pipeline.

```typescript
const structuredTemplate = `
You are generating JSON only.
Input:
{input}
Output schema:
{
  "summary": "string",
  "riskLevel": "low|medium|high",
  "missingFields": ["string"]
}
`;

function renderStructured(input: string): string {
  return structuredTemplate.replace("{input}", input);
}

const structuredPrompt = renderStructured(
  "Customer has repeated claims related to roof damage over three years."
);

console.log(structuredPrompt);
```
Testing It
Run the file with `OPENAI_API_KEY` set and verify that the rendered prompts contain no unreplaced placeholders like `{policyId}` or `{claimSummary}`. Then check that the agent returns content consistent with your system message, especially tone and output shape.
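That placeholder check is easy to automate. A small guard you can run on every rendered prompt before it leaves your code (the helper name is my own, not from AutoGen):

```typescript
// Sketch: throw if a rendered prompt still contains {word}-style placeholders.
// `assertNoUnrenderedVars` is a hypothetical helper name for illustration.
function assertNoUnrenderedVars(prompt: string): void {
  const leftovers = prompt.match(/\{\w+\}/g);
  if (leftovers) {
    throw new Error(`Unrendered template variables: ${leftovers.join(", ")}`);
  }
}
```

Run it only on rendered prompts; running it on raw templates would (correctly) flag every variable slot.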
If you use structured prompts, validate the response against your expected JSON contract before trusting it downstream. In practice, I also log both the raw rendered prompt and final model output during development so prompt regressions are easy to spot.
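As a sketch of that validation step with no extra dependencies, here is a hand-rolled type guard for the contract in the structured template above. The field names come from that template; the `parseRiskReport` helper itself is illustrative, and a schema library is the sturdier choice in production.

```typescript
// Sketch: validate model JSON against the risk-report contract before
// it enters the pipeline. Hand-rolled for illustration only.
type RiskReport = {
  summary: string;
  riskLevel: "low" | "medium" | "high";
  missingFields: string[];
};

function parseRiskReport(raw: string): RiskReport {
  const data = JSON.parse(raw) as Record<string, unknown>;
  const levels = ["low", "medium", "high"];
  if (
    typeof data.summary !== "string" ||
    typeof data.riskLevel !== "string" ||
    !levels.includes(data.riskLevel) ||
    !Array.isArray(data.missingFields) ||
    !data.missingFields.every((field) => typeof field === "string")
  ) {
    throw new Error("Model output does not match the risk report contract");
  }
  return data as unknown as RiskReport;
}
```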
Next Steps
- Add Zod validation for structured outputs before passing results to your business logic
- Build a prompt registry so teams can version templates by use case and environment
- Extend this pattern into multi-agent workflows where each agent gets its own template contract
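For the registry idea, a minimal in-memory sketch is enough to show the shape. The class and method names are assumptions; a real registry would likely back onto a database or config service and carry environment metadata.

```typescript
// Sketch: version prompt templates by use case so teams can roll
// prompts forward or back. In-memory only; names are illustrative.
class PromptRegistry {
  private templates = new Map<string, string>();

  register(useCase: string, version: string, template: string): void {
    this.templates.set(`${useCase}@${version}`, template);
  }

  get(useCase: string, version: string): string {
    const template = this.templates.get(`${useCase}@${version}`);
    if (!template) {
      throw new Error(`No template registered for ${useCase}@${version}`);
    }
    return template;
  }
}
```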
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit