AutoGen Tutorial (TypeScript): adding tool use for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add tool use to an AutoGen TypeScript agent so it can call real functions instead of only replying with text. You need this when your agent has to fetch data, look up records, or trigger business logic without handing control back to your application on every turn.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project initialized with tsconfig.json
  • AutoGen packages:
    • @autogen/core
    • @autogen/openai
  • An OpenAI API key in OPENAI_API_KEY
  • A model that supports tool/function calling, such as gpt-4o-mini
  • Basic familiarity with async/await and TypeScript types
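If you are starting from scratch, a minimal tsconfig.json along these lines works for the snippets in this tutorial; treat it as a starting point rather than the only valid setup:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["*.ts"]
}
```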

Step-by-Step

  1. Start by installing the packages and setting up a minimal TypeScript project. If you already have a project, just make sure the AutoGen packages are installed and your environment variable is set.
npm install @autogen/core @autogen/openai
npm install -D typescript tsx @types/node
export OPENAI_API_KEY="your-api-key"
  2. Define a real tool as a plain async function. In production, this is where you connect to a database, internal API, or policy service; for this tutorial, we’ll keep it deterministic and easy to test.
// tools.ts
export async function getPolicyStatus(policyId: string): Promise<string> {
  const mockDatabase: Record<string, string> = {
    "POL-1001": "Active",
    "POL-1002": "Pending renewal",
    "POL-1003": "Lapsed",
  };

  return mockDatabase[policyId] ?? "Policy not found";
}
  3. Create an agent that knows about the tool. The important part is registering the function with a name, description, and JSON schema so the model can decide when to call it.
// agent.ts
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";
import { getPolicyStatus } from "./tools";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

export const agent = new AssistantAgent({
  name: "support_agent",
  modelClient,
  tools: [
    {
      name: "get_policy_status",
      description: "Get the current status of an insurance policy by policy ID.",
      parameters: {
        type: "object",
        properties: {
          policyId: { type: "string", description: "The policy identifier" },
        },
        required: ["policyId"],
        additionalProperties: false,
      },
      execute: async ({ policyId }: { policyId: string }) => {
        return await getPolicyStatus(policyId);
      },
    },
  ],
});
  4. Send a user message and let the agent decide whether to use the tool. Keep the prompt specific; if you ask for a policy status directly, the model will usually call the function instead of inventing an answer.
// index.ts
import { agent } from "./agent";

async function main() {
  const result = await agent.run([
    {
      role: "user",
      content: "What is the status of policy POL-1002?",
    },
  ]);

  console.log(result);
}

main().catch(console.error);
  5. Run the script and inspect the output. If tool use is wired correctly, you should see the assistant call get_policy_status, then return a response based on the tool result rather than guessing.
npx tsx index.ts
  6. Tighten the tool contract before you move this into production. Add input validation, handle missing records explicitly, and make sure your tool returns structured data if downstream systems need more than plain text.
// tools.ts
export async function getPolicyStatus(policyId: string): Promise<{
  policyId: string;
  status: string;
}> {
  const mockDatabase: Record<string, string> = {
    "POL-1001": "Active",
    "POL-1002": "Pending renewal",
    "POL-1003": "Lapsed",
  };

  return {
    policyId,
    status: mockDatabase[policyId] ?? "Policy not found",
  };
}
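The input validation mentioned above can sit in front of the lookup. Here is a minimal sketch, assuming policy IDs follow the "POL-" plus four digits pattern used in the mock data (adjust the pattern for your real identifier scheme); isValidPolicyId is a helper introduced for illustration, not part of AutoGen:

```typescript
// tools.ts (validation sketch)
// Assumption: policy IDs look like "POL-" followed by four digits, matching
// the mock database above. Swap the pattern for your real ID scheme.
const POLICY_ID_PATTERN = /^POL-\d{4}$/;

export function isValidPolicyId(policyId: string): boolean {
  return POLICY_ID_PATTERN.test(policyId);
}

export async function getPolicyStatus(policyId: string): Promise<{
  policyId: string;
  status: string;
}> {
  // Reject malformed IDs before touching any backend system.
  if (!isValidPolicyId(policyId)) {
    return { policyId, status: "Invalid policy ID format" };
  }

  const mockDatabase: Record<string, string> = {
    "POL-1001": "Active",
    "POL-1002": "Pending renewal",
    "POL-1003": "Lapsed",
  };

  return {
    policyId,
    status: mockDatabase[policyId] ?? "Policy not found",
  };
}
```

Rejecting bad input inside the tool keeps the error path deterministic: the model receives a predictable status string instead of a thrown exception it cannot reason about.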

Testing It

Test with at least three prompts:

  • One that clearly needs the tool, like “What is the status of policy POL-1001?”
  • One that should not use the tool, like “Explain what a lapsed policy means.”
  • One with an unknown ID, like “Check policy POL-9999.”

If everything is wired correctly, only the first and third prompts should trigger the function. For production readiness, also verify that bad inputs are rejected before they reach your backend logic and that failures return predictable messages.
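The tool contract itself can be checked without a model in the loop. A minimal table-driven sketch, with the Step 6 lookup restated inline (in its plain-string form) so the file runs on its own:

```typescript
// check-tool.ts — table-driven checks for the tool contract (no model needed).
// The tool from Step 2 is restated inline so this snippet is self-contained.
async function getPolicyStatus(policyId: string): Promise<string> {
  const mockDatabase: Record<string, string> = {
    "POL-1001": "Active",
    "POL-1002": "Pending renewal",
    "POL-1003": "Lapsed",
  };
  return mockDatabase[policyId] ?? "Policy not found";
}

const cases: Array<{ input: string; expected: string }> = [
  { input: "POL-1001", expected: "Active" },
  { input: "POL-9999", expected: "Policy not found" }, // unknown ID -> predictable message
];

export async function runChecks(): Promise<boolean> {
  for (const { input, expected } of cases) {
    const status = await getPolicyStatus(input);
    if (status !== expected) {
      console.error(`FAIL ${input}: got "${status}", expected "${expected}"`);
      return false;
    }
  }
  console.log("all tool checks passed");
  return true;
}
```

The agent-level prompts above still need a live model and API key; these checks only pin down what the tool returns for each class of input.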

Next Steps

  • Add multiple tools and let AutoGen choose between them based on descriptions and schemas.
  • Return structured JSON from tools and format it in a final assistant response.
  • Add logging around tool execution so you can trace which calls were made during each conversation turn.
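The logging idea from the last bullet can be sketched as a small wrapper around a tool's execute function; withToolLogging is an illustrative helper introduced here, not an AutoGen API:

```typescript
// Wrap any tool's execute function with entry/exit logging.
// withToolLogging is an illustrative helper, not part of AutoGen.
type ToolExecute<A, R> = (args: A) => Promise<R>;

export function withToolLogging<A, R>(
  name: string,
  execute: ToolExecute<A, R>
): ToolExecute<A, R> {
  return async (args: A) => {
    const started = Date.now();
    console.log(`[tool:${name}] called with ${JSON.stringify(args)}`);
    try {
      const result = await execute(args);
      console.log(`[tool:${name}] succeeded in ${Date.now() - started}ms`);
      return result;
    } catch (error) {
      console.error(`[tool:${name}] failed after ${Date.now() - started}ms`, error);
      throw error;
    }
  };
}
```

In the registration from Step 3, you would wrap the existing callback, e.g. execute: withToolLogging("get_policy_status", async ({ policyId }) => getPolicyStatus(policyId)), so every invocation is traced without changing the tool itself.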


By Cyprian Aarons, AI Consultant at Topiax.
