AutoGen Tutorial (TypeScript): adding tool use for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add tool use to an AutoGen TypeScript agent so it can call real functions instead of only chatting. You need this when your agent must fetch data, validate inputs, query internal systems, or take deterministic actions before replying.

What You'll Need

  • Node.js 18+
  • A TypeScript project with typescript and ts-node or a build step
  • AutoGen for TypeScript installed:
    • npm install @autogenai/autogen
  • An OpenAI-compatible API key in your environment:
    • OPENAI_API_KEY=...
  • A model that supports tool/function calling
  • Basic familiarity with AutoGen agents and chat loops

Step-by-Step

  1. Start with a minimal agent setup and define the tool you want the model to call. Keep the tool small and deterministic; for production, tools should do one thing and return structured data.
import { AssistantAgent, OpenAIChatCompletionClient, type Tool } from "@autogenai/autogen";

const client = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const getPolicyStatus: Tool = {
  name: "get_policy_status",
  description: "Fetch the current status of an insurance policy by policy number.",
  parameters: {
    type: "object",
    properties: {
      policyNumber: { type: "string", description: "Policy number like POL12345" },
    },
    required: ["policyNumber"],
    additionalProperties: false,
  },
  execute: async ({ policyNumber }: { policyNumber: string }) => {
    return {
      policyNumber,
      status: "active",
      premiumDueDate: "2026-05-01",
    };
  },
};

const agent = new AssistantAgent({
  name: "policy_assistant",
  modelClient: client,
});
  2. Register the tool on the agent. This is the part that makes the function visible to the model during planning, so it can decide when to call it.
agent.registerTool(getPolicyStatus);

async function main() {
  const result = await agent.run([
    {
      role: "user",
      content: "Check policy POL12345 and tell me whether it's active.",
    },
  ]);

  console.log(result.messages);
}

main().catch(console.error);
  3. Add a second tool for a different action so you can see how AutoGen handles multiple capabilities. In real systems, this is where you separate read-only lookups from write operations.
const calculatePremiumEstimate: Tool = {
  name: "calculate_premium_estimate",
  description: "Estimate a monthly premium based on age and coverage amount.",
  parameters: {
    type: "object",
    properties: {
      age: { type: "number" },
      coverageAmount: { type: "number" },
    },
    required: ["age", "coverageAmount"],
    additionalProperties: false,
  },
  execute: async ({ age, coverageAmount }: { age: number; coverageAmount: number }) => {
    const base = coverageAmount / 1000;
    const ageFactor = age > 50 ? 1.4 : age > 30 ? 1.15 : 1;
    return {
      monthlyPremiumEstimate: Number((base * ageFactor).toFixed(2)),
      currency: "USD",
    };
  },
};

agent.registerTool(calculatePremiumEstimate);
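The estimate math is easy to sanity-check in isolation before involving a model. The sketch below reimplements the tool's pricing logic as a plain function; the tiering thresholds come from the tool above and are illustrative, not an official rating formula:

```typescript
// Standalone copy of the calculate_premium_estimate logic, so the math
// can be unit-tested without an agent or model in the loop.
function estimateMonthlyPremium(age: number, coverageAmount: number): number {
  const base = coverageAmount / 1000; // $1 of premium per $1,000 of coverage
  const ageFactor = age > 50 ? 1.4 : age > 30 ? 1.15 : 1;
  return Number((base * ageFactor).toFixed(2));
}

console.log(estimateMonthlyPremium(42, 250_000)); // 250 * 1.15 = 287.5
console.log(estimateMonthlyPremium(25, 100_000)); // 100 * 1    = 100
```

Keeping the computation in a plain function like this also means the tool's execute can stay a thin wrapper around logic you trust.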
  4. Force the assistant to use tools by giving it a task that requires computation or lookup. The key detail is that you should expect a normal assistant response after the tool result comes back, not just raw JSON.
async function runExamples() {
  const lookup = await agent.run([
    { role: "user", content: "Is policy POL12345 active?" },
  ]);

  console.log("Lookup output:");
  console.log(JSON.stringify(lookup.messages, null, 2));

  const estimate = await agent.run([
    { role: "user", content: "Estimate a premium for a 42-year-old with $250000 coverage." },
  ]);

  console.log("Estimate output:");
  console.log(JSON.stringify(estimate.messages, null, 2));
}

runExamples().catch(console.error);
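After a run like this you usually want just the final assistant text, not the whole transcript. Assuming messages come back as simple role/content records (the exact shape depends on your AutoGen version, so treat this as a sketch), a small helper keeps that extraction in one place:

```typescript
// Assumed transcript entry shape; adjust to the actual message type
// your AutoGen version returns.
interface ChatMessage {
  role: string; // e.g. "user", "tool", "assistant"
  content: string;
}

// Walk the transcript backwards and return the last assistant-authored
// message, which should be the model's final answer.
function lastAssistantMessage(messages: ChatMessage[]): string | undefined {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === "assistant") return messages[i].content;
  }
  return undefined;
}
```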
  5. If you need stricter control, validate inputs inside the tool before returning anything useful. That keeps bad arguments from turning into bad downstream calls.
const getClaimStatusWithValidation: Tool = {
  name: "get_claim_status",
  description: "Fetch claim status by claim ID.",
  parameters: {
    type: "object",
    properties: {
      claimId: { type: "string" },
    },
    required: ["claimId"],
    additionalProperties: false,
  },
  execute: async ({ claimId }: { claimId?: string }) => {
    if (!claimId || !/^CLM\d+$/.test(claimId)) {
      throw new Error("Invalid claimId format. Expected something like CLM1001.");
    }

    return {
      claimId,
      // Internal status for customer-service use; not a final decision.
      status: "under_review",
    };
  },
};

agent.registerTool(getClaimStatusWithValidation);

Testing It

Run the file with ts-node or compile it with tsc and execute the output with Node.js. Use prompts that clearly require external data or calculation so you can confirm the model actually calls the tool instead of hallucinating an answer.

Check logs or returned messages for tool call entries followed by a final assistant response that uses the tool output. If the model answers without calling your function, tighten the tool description or make the user prompt more specific.
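That check can be automated. Assuming each transcript entry carries a role (again, the precise message shape varies by AutoGen version), the helper below verifies that at least one tool call happened before the final assistant reply:

```typescript
// Assumed transcript entry shape for this sketch.
interface TranscriptEntry {
  role: string; // e.g. "user", "tool", "assistant"
  content: string;
}

// True only if the transcript contains a tool entry followed later by an
// assistant message - i.e. the model called a tool and then summarized
// the result instead of answering from memory.
function usedToolBeforeAnswering(messages: TranscriptEntry[]): boolean {
  const lastToolIdx = messages.map((m) => m.role).lastIndexOf("tool");
  if (lastToolIdx === -1) return false;
  return messages.slice(lastToolIdx + 1).some((m) => m.role === "assistant");
}
```

Wiring this into a test suite gives you a cheap regression signal when a prompt or tool-description change silently stops the model from calling your function.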

For production testing, verify failure paths too:

  • missing required fields
  • invalid formats
  • timeouts from downstream systems
  • repeated calls to the same tool
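The first two failure paths can be exercised directly against the tool's execute function, no model required. A minimal sketch, using the same regex and error message as the step 5 tool but with a simplified return shape:

```typescript
// Standalone copy of get_claim_status's validation logic, extracted so
// the failure paths can be tested deterministically.
async function getClaimStatus({ claimId }: { claimId?: string }) {
  if (!claimId || !/^CLM\d+$/.test(claimId)) {
    throw new Error("Invalid claimId format. Expected something like CLM1001.");
  }
  return { claimId, status: "under_review" };
}

// Helper: resolve true if the call was rejected, false if it succeeded.
async function expectRejection(input: { claimId?: string }): Promise<boolean> {
  try {
    await getClaimStatus(input);
    return false; // bad input was accepted - a test failure
  } catch {
    return true;
  }
}
```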

Next Steps

  • Add memory and conversation state so tool results persist across turns
  • Wrap tools around real APIs like policy admin systems or claims services
  • Add guardrails for approval flows before any write-action tool runs


By Cyprian Aarons, AI Consultant at Topiax.
