How to Fix 'output parsing error during development' in LlamaIndex (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

Tags: output-parsing-error-during-development, llamaindex, typescript

When you see output parsing error during development in LlamaIndex TypeScript, it usually means the framework expected a structured response and got something it could not parse into the shape your program asked for. In practice, this shows up when using structured output, query engines, agents, or custom response schemas and the model returns extra text, malformed JSON, or the wrong field names.

The key point: this is usually not a model bug. It is almost always a mismatch between the output format you requested and the actual text the LLM produced.

The Most Common Cause

The #1 cause is asking the LLM for structured output but not enforcing a strict schema or a parser-compatible prompt. In TypeScript, this often happens when you use asStructuredLLM, JSONQueryEngine, or an OutputParser-backed workflow and then let the model answer in free-form text.

Here’s the broken pattern versus the fixed pattern:

| Broken | Fixed |
| --- | --- |
| Model returns prose plus JSON | Model returns only valid JSON |
| No schema enforcement | Explicit schema / parser |
| Prompt says "respond in JSON" loosely | Prompt includes exact keys and no extra text |
```ts
// BROKEN
import { OpenAI } from "@llamaindex/openai";
import { Settings } from "llamaindex";

Settings.llm = new OpenAI({
  model: "gpt-4o-mini",
});

const prompt = `
Extract customer details as JSON:
name, policyNumber, claimAmount
`;

const response = await Settings.llm.complete(prompt);
// Often returns:
// "Sure — here is the JSON:\n{ ... }"
// which breaks downstream parsing
console.log(response.text);
```
```ts
// FIXED
import { OpenAI } from "@llamaindex/openai";
import { z } from "zod";
import { Settings } from "llamaindex";

const CustomerSchema = z.object({
  name: z.string(),
  policyNumber: z.string(),
  claimAmount: z.number(),
});

Settings.llm = new OpenAI({
  model: "gpt-4o-mini",
});

const prompt = `
Return ONLY valid JSON matching this schema:
{
  "name": string,
  "policyNumber": string,
  "claimAmount": number
}
No markdown. No explanation.
`;

const response = await Settings.llm.complete(prompt);
const parsed = CustomerSchema.parse(JSON.parse(response.text));
console.log(parsed);
```

If you are using a higher-level LlamaIndex class like StructuredLLM, QueryEngine, or an agent workflow, make sure the schema is actually wired into that layer. Telling the model “please respond with JSON” is not enough.

Other Possible Causes

1) The model adds markdown fences or commentary

A common failure mode is output like:

```json
{ "name": "Ava" }
```

That will trigger parse failures in code expecting raw JSON.

```ts
// Bad prompt fragment
"Return your answer as JSON."

// Better prompt fragment
"Return ONLY raw JSON. Do not wrap the output in code fences. Do not add commentary."
```

2) Your parser expects numbers but gets strings

This happens when your schema says claimAmount: number, but the model emits "1250" as a string.

```ts
const data = {
  claimAmount: "1250", // wrong type for strict parsers
};
```

Fix by either tightening your prompt or coercing before validation:

```ts
const normalized = {
  ...data,
  claimAmount: Number(data.claimAmount),
};
```

3) Your tool output is being mixed with final answer text

In agent flows, LlamaIndex may expect tool-call-compatible output but receive a blended answer like:

I checked the policy database. Final answer: approved.

That can break parsers inside classes such as ReActAgent, FunctionAgent, or workflow steps that expect machine-readable outputs.

```ts
// Bad: final answer mixed with structured result
"Approved. {\"status\":\"approved\"}"
```

Use separate channels for tool results and final responses where supported, and keep tool outputs machine-readable.
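
As a last-resort fallback (not a LlamaIndex feature), you can try to pull the JSON object out of a blended answer before giving up. A sketch with a hypothetical `extractJsonObject` helper; fixing the prompt or agent wiring is still the real fix:

```ts
// Hypothetical fallback: extract the {...} block from a blended answer.
function extractJsonObject(text: string): unknown {
  const match = text.match(/\{[\s\S]*\}/); // outermost braces, greedy
  if (!match) return null;
  try {
    return JSON.parse(match[0]);
  } catch {
    return null; // braces found but contents were not valid JSON
  }
}

const blended = 'I checked the policy database. Final answer: {"status":"approved"}';
const obj = extractJsonObject(blended) as { status: string } | null;
console.log(obj?.status); // "approved"
```

Note the greedy regex assumes a single JSON object per answer; nested or multiple objects would need a real JSON scanner.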

4) Schema drift between prompt and code

You changed the code shape but forgot to update the prompt or downstream mapper.

```ts
// Code expects:
type ClaimResult = {
  decision: string;
  reasonCode: string;
};

// Prompt still asks for:
// "approved": true,
// "reason": "..."
```

This fails when your parser tries to map fields that no longer exist. Keep schema definitions centralized and reuse them in both prompt generation and validation.

How to Debug It

1. Log the raw model output first.
   - Don't inspect only parsed objects.
   - Print `response.text` before parsing so you can see whether you got prose, markdown fences, malformed JSON, or wrong field names.
2. Check whether parsing fails before your own code runs.
   - Errors like `Error: Could not parse output`, `OutputParserException`, or `Failed to parse response` mean the issue is upstream of your business logic.
3. Validate against a local schema manually.
   - Take the raw text and run it through `JSON.parse()` and then Zod.
   - If `JSON.parse()` fails, your problem is formatting.
   - If Zod fails, your problem is a shape/type mismatch.
4. Reduce complexity.
   - Remove tools, memory, retrieval, and multi-step prompts.
   - Test with a single plain completion call first.
   - If that works, reintroduce components one by one until parsing breaks again.

Prevention

- Use one source of truth for schemas. Define your Zod or TypeScript schema once and reuse it in prompts, validators, and post-processing.
- Make prompts explicit about format constraints. Say "return ONLY raw JSON", specify the exact keys, and ban markdown fences unless your parser expects them.
- Validate every structured response at runtime. Even if you trust the model most of the time, production code should treat LLM output as untrusted input.

If you are seeing this error during development in LlamaIndex TypeScript, start by inspecting the raw completion text. In most cases, fixing prompt/schema alignment resolves it immediately; if not, one of your downstream parsers is stricter than the model output you’re feeding it.



By Cyprian Aarons, AI Consultant at Topiax.
