How to Fix 'tool calling failure' in AutoGen (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
Tags: tool-calling-failure, autogen, typescript


A "tool calling failure" in AutoGen TypeScript usually means the model tried to invoke a tool, but the runtime could not execute it or could not parse the tool call correctly. You’ll typically see it when using AssistantAgent, UserProxyAgent, or a custom tool registration flow with OpenAI-compatible models.

In practice, this error shows up when the tool schema, model config, or function signature does not line up with what AutoGen expects.

The Most Common Cause

The #1 cause is a mismatch between the tool definition and the actual function signature. In AutoGen TS, your tool must accept the arguments exactly as declared in its schema, and it must return something serializable.

A common failure looks like this:

| Broken pattern | Fixed pattern |
| --- | --- |
| Tool expects a string but the schema sends an object | Tool accepts the exact object shape defined in the schema |
| Returns a non-serializable value | Returns plain JSON-safe data |

Broken code

import { AssistantAgent } from "@autogen/agent";
import { z } from "zod";

const agent = new AssistantAgent({
  name: "assistant",
  modelClient,
});

agent.registerTool({
  name: "lookupPolicy",
  description: "Look up an insurance policy",
  parameters: z.object({
    policyId: z.string(),
  }),
  execute: async (policyId: string) => {
    // ❌ AutoGen passes an object, not a raw string
    return await db.policies.findById(policyId);
  },
});

This often surfaces as something like:

  • tool calling failure
  • Failed to execute tool lookupPolicy
  • Invalid tool arguments
  • Cannot read properties of undefined

Fixed code

import { AssistantAgent } from "@autogen/agent";
import { z } from "zod";

const agent = new AssistantAgent({
  name: "assistant",
  modelClient,
});

agent.registerTool({
  name: "lookupPolicy",
  description: "Look up an insurance policy",
  parameters: z.object({
    policyId: z.string(),
  }),
  execute: async ({ policyId }: { policyId: string }) => {
    const policy = await db.policies.findById(policyId);

    // ✅ Return plain JSON-safe data
    return {
      policyId,
      status: policy?.status ?? "not_found",
      premium: policy?.premium ?? null,
    };
  },
});

The important part is that the execute input matches the schema shape. If your tool takes { policyId }, do not type it as string.
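One way to stop the schema and the handler from drifting apart is to derive the handler's input type from a single source of truth. With Zod that would be z.infer<typeof parameters>; the sketch below shows the same idea with a plain shared type (LookupPolicyArgs and the tool object are illustrative names, not AutoGen APIs):

```typescript
// Sketch: one shared type drives both the declared parameters and the
// handler signature, so they cannot silently drift apart.
// (With Zod, `z.infer<typeof parameters>` gives the same guarantee.)
type LookupPolicyArgs = { policyId: string };

const lookupPolicyTool = {
  name: "lookupPolicy",
  description: "Look up an insurance policy",
  // The handler's parameter is typed by the same alias the schema describes.
  execute: async (args: LookupPolicyArgs) => ({
    policyId: args.policyId,
    status: "active",
  }),
};
```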

Other Possible Causes

1. The model you picked does not support tool calls properly

Some OpenAI-compatible endpoints claim support but fail on structured tool invocation. You’ll see errors like:

  • tool calling failure
  • Model returned invalid tool call
  • Function call arguments missing

// A known-good baseline: an OpenAI model with native tool-calling support
const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

If you’re using a proxy or local gateway, verify it actually supports function/tool calling end to end.
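One quick check for a proxy or gateway is to send it a minimal chat request that declares a tool, then see whether the response actually contains a structured tool_calls array. The helper below only builds an OpenAI-style request body; POST it to your gateway's chat completions endpoint yourself (the model name is a placeholder):

```typescript
// Build a minimal OpenAI-compatible request that invites a tool call.
// Send this to your gateway's /v1/chat/completions endpoint and check
// whether choices[0].message.tool_calls comes back populated.
function buildToolProbeRequest(model: string) {
  return {
    model,
    messages: [{ role: "user", content: "Look up policy P-42" }],
    tools: [
      {
        type: "function",
        function: {
          name: "lookupPolicy",
          description: "Look up an insurance policy",
          parameters: {
            type: "object",
            properties: { policyId: { type: "string" } },
            required: ["policyId"],
          },
        },
      },
    ],
    tool_choice: "auto",
  };
}
```

If the response comes back as plain text instead of a tool_calls entry, the gateway is not doing structured tool invocation, no matter what its docs claim.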

2. Your tool returns unsupported data

AutoGen expects JSON-serializable output. Returning class instances, circular objects, buffers, or database records with methods can break execution.

// Bad
return someMongooseDoc;

// Good
return someMongooseDoc.toObject();

Or:

// Bad
return new Date();

// Good
return new Date().toISOString();
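A cheap guard is to round-trip the tool's return value through JSON before handing it back: anything that doesn't survive (circular references, class instances, BigInts) fails loudly at the tool boundary instead of deep inside AutoGen. This is a generic sketch, not an AutoGen API:

```typescript
// Round-trip through JSON to prove the value is serializable.
// Throws on circular references and BigInt; silently drops functions
// and undefined, which is usually acceptable at a tool boundary.
function toJsonSafe<T>(value: T): unknown {
  return JSON.parse(JSON.stringify(value));
}
```

Call it as the last line of every execute (return toJsonSafe(result);) and serialization bugs surface in your own stack trace.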

3. Your schema is too strict for what the model sends

If your Zod schema rejects optional fields or enum values the model slightly varies on, the tool call fails before execution.

// Too strict
parameters: z.object({
  status: z.enum(["active", "inactive"]),
});

// Safer if the model may vary
parameters: z.object({
  status: z.string(),
});

You can validate more strictly inside the function after logging raw input.
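The "validate inside the function" step can be as simple as normalizing the raw string and mapping it onto the values you actually support. normalizeStatus below is an illustrative helper, not part of AutoGen:

```typescript
// Accept a loose string from the model, then normalize it ourselves
// instead of letting a strict schema reject the whole tool call.
const KNOWN_STATUSES = ["active", "inactive"] as const;
type Status = (typeof KNOWN_STATUSES)[number];

function normalizeStatus(raw: string): Status | "unknown" {
  const cleaned = raw.trim().toLowerCase();
  return (KNOWN_STATUSES as readonly string[]).includes(cleaned)
    ? (cleaned as Status)
    : "unknown"; // log and handle this case rather than failing the call
}
```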

4. You registered the tool on one agent but expected another agent to use it

This happens when wiring multi-agent flows. The assistant tries to call a tool that was never exposed to that specific agent instance.

const assistant = new AssistantAgent({ name: "assistant", modelClient });
const reviewer = new AssistantAgent({ name: "reviewer", modelClient });

// Tool registered here
assistant.registerTool({ ... });

// But conversation runs through reviewer
await reviewer.run("Check policy coverage");

Tools are attached to the agent that performs generation. Register them on the right agent.
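If several agents should be able to call the same tool, registering it in one loop removes the chance of forgetting an agent. The ToolHost interface below just names the shape that this article's registerTool calls assume; it is not an exported AutoGen type:

```typescript
// Minimal shape of "something you can register a tool on", matching the
// registerTool(...) calls used throughout this article.
interface ToolHost {
  registerTool(tool: { name: string; [key: string]: unknown }): void;
}

// Register one tool on every agent that performs generation.
function registerOnAll(tool: { name: string }, agents: ToolHost[]): void {
  for (const agent of agents) {
    agent.registerTool(tool);
  }
}
```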

How to Debug It

  1. Log the raw tool call arguments

    • Print what AutoGen receives before execution.
    • If arguments look like { arguments: "{...}" }, you may be parsing one layer too early or too late.
  2. Wrap every tool in try/catch

    • Surface validation and runtime errors clearly.
    • Return explicit messages instead of letting exceptions collapse into tool calling failure.

execute: async (input) => {
  try {
    console.log("tool input:", input);
    return await doWork(input);
  } catch (err) {
    console.error("lookupPolicy failed:", err);
    throw err;
  }
}
  3. Validate against your schema manually

    • Run sample payloads through Zod before wiring them into AutoGen.
    • If Zod rejects it locally, the agent will reject it too.
  4. Check model capability and version

    • Confirm your provider supports tools.
    • Test with a known-good GPT-4-class model before blaming AutoGen.
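Step 1's { arguments: "{...}" } case is common with OpenAI-style tool calls, where arguments always arrive as a JSON string inside the envelope. A defensive unwrapper (illustrative, not an AutoGen helper) parses exactly one layer:

```typescript
// OpenAI-style tool calls deliver `arguments` as a JSON *string*.
// Parse it exactly once; parsing zero or two layers is the classic bug.
function parseToolArguments(raw: unknown): Record<string, unknown> {
  if (typeof raw === "string") {
    return JSON.parse(raw);
  }
  if (raw !== null && typeof raw === "object") {
    return raw as Record<string, unknown>; // already parsed upstream
  }
  throw new Error(`Unexpected tool arguments: ${String(raw)}`);
}
```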

Prevention

  • Keep tool inputs simple:

    • Use flat JSON objects.
    • Avoid nested unions unless you need them.
  • Make outputs boring:

    • Return strings, numbers, booleans, arrays, and plain objects.
    • Convert dates and ORM records before returning them.
  • Add contract tests for tools:

    • Test each registered tool with representative payloads.
    • Fail CI if schema and implementation drift apart.

If you want fewer production surprises, treat every AutoGen tool like an API boundary. Define the contract clearly, validate aggressively, and don’t assume the model will send exactly what you hoped for.



By Cyprian Aarons, AI Consultant at Topiax.
