LangGraph Tutorial (TypeScript): parsing structured output for advanced developers

By Cyprian Aarons, updated 2026-04-21

This tutorial shows how to build a LangGraph workflow in TypeScript that forces an LLM to return structured JSON, validates it at the edge, and routes downstream logic based on the parsed result. You need this when your agent output must be machine-safe for billing, claims, KYC, or any workflow where free-form text is not acceptable.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • npm or pnpm
  • OpenAI API key
  • Packages:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • zod
    • dotenv

Step-by-Step

  1. Start by installing the dependencies and setting up your environment variables. Keep the model choice simple here; the point is to get reliable structured output, not to optimize prompts yet.
npm install @langchain/langgraph @langchain/openai @langchain/core zod dotenv
export OPENAI_API_KEY="your-key"
  2. Define a schema for the output you want from the model. This is the contract your graph will enforce, and it should match whatever downstream service expects.
import "dotenv/config";
import { z } from "zod";

export const TicketSchema = z.object({
  category: z.enum(["billing", "claims", "kyc", "technical"]),
  priority: z.enum(["low", "medium", "high"]),
  summary: z.string().min(10),
  needsHumanReview: z.boolean(),
});

export type Ticket = z.infer<typeof TicketSchema>;
  3. Build a node that asks the model to classify text into that schema. The important part is withStructuredOutput, which makes parsing part of the model call instead of a fragile post-processing step.
import { ChatOpenAI } from "@langchain/openai";
import { TicketSchema, type Ticket } from "./schema";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export async function parseTicket(input: string): Promise<Ticket> {
  const structured = llm.withStructuredOutput(TicketSchema);
  return structured.invoke(
    `Classify this customer message into the ticket schema:\n\n${input}`
  );
}
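When the model's reply cannot be coerced into the schema, the structured call throws. A small generic retry helper keeps that failure mode contained; this is a hypothetical utility of my own, not a LangChain API, and parseTicket would simply be wrapped with it:

```typescript
// Generic retry helper: re-invokes fn up to `attempts` times and rethrows
// the last error if every attempt fails. Hypothetical, not part of LangChain.
export async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Usage with the parser from this step:
// const ticket = await withRetries(() => parseTicket(input), 3);
```

Keeping the retry outside parseTicket means the same helper works for any flaky async call in the graph, not just this one node.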
  4. Wire that parser into a LangGraph state machine. This graph takes raw text, parses it once, and then routes based on whether human review is needed.
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { parseTicket } from "./parseTicket";
import type { Ticket } from "./schema";

const GraphState = Annotation.Root({
  input: Annotation<string>(),
  ticket: Annotation<Ticket | null>(),
});

// Single parse step: raw text in, validated Ticket out.
async function classifyNode(state: typeof GraphState.State) {
  const ticket = await parseTicket(state.input);
  return { ticket };
}

// Conditional edge: pick the next node from the parsed ticket.
function routeNode(state: typeof GraphState.State) {
  return state.ticket?.needsHumanReview ? "human" : "auto";
}

const graph = new StateGraph(GraphState)
  .addNode("classify", classifyNode)
  .addNode("human", async (state) => ({
    ticket: { ...state.ticket!, needsHumanReview: true },
  }))
  .addNode("auto", async (state) => ({ ticket: state.ticket }))
  .addEdge(START, "classify")
  .addConditionalEdges("classify", routeNode, {
    human: "human",
    auto: "auto",
  })
  .addEdge("human", END)
  .addEdge("auto", END)
  .compile();

export { graph };
  5. Run the graph with a real input and inspect the parsed object. In production, this is where you’d branch into CRM updates, case creation, or an agent handoff.
import { graph } from "./graph";

async function main() {
  const result = await graph.invoke({
    input:
      "I was charged twice for my policy renewal and need this fixed urgently.",
    ticket: null,
  });

  console.log(JSON.stringify(result.ticket, null, 2));
}

main().catch(console.error);
  6. Add a hard validation layer before trusting anything downstream. Even with structured output enabled, I still validate at the boundary because that keeps failures explicit when prompts drift or model behavior changes.
import { TicketSchema } from "./schema";
import { graph } from "./graph";

async function main() {
  const result = await graph.invoke({
    input: "My claim was denied but I have supporting documents.",
    ticket: null,
  });

  const parsed = TicketSchema.parse(result.ticket);
  console.log(`Category: ${parsed.category}`);
}

main().catch(console.error);

Testing It

Run the script with two or three very different inputs and confirm the output always matches the schema shape. You want to see enum values only from the allowed set, no missing fields, and a boolean for needsHumanReview.

Test one billing complaint, one KYC issue, and one technical support request. If you get malformed data back, check that you are using withStructuredOutput and not just prompting for JSON in plain text.

For integration tests, assert against TicketSchema.safeParse(result.ticket) instead of comparing raw strings. That gives you stable tests even when wording changes but structure stays correct.

Next Steps

  • Add conditional edges for escalation by category instead of just human review
  • Replace the single-node parser with a multi-step extraction flow using tool calls
  • Persist parsed tickets in Postgres or DynamoDB with an idempotency key
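For the persistence bullet, one common approach is to derive the idempotency key deterministically from a stable source-message identifier plus the parsed fields, for example with a SHA-256 hash. This is a sketch; the key layout is my assumption, so adapt it to your store:

```typescript
import { createHash } from "node:crypto";

interface TicketLike {
  category: string;
  priority: string;
  summary: string;
}

// Deterministic key: the same source message and classification always map
// to the same key, so retried writes become no-ops at the storage layer.
export function ticketIdempotencyKey(
  sourceMessageId: string,
  ticket: TicketLike
): string {
  return createHash("sha256")
    .update(
      `${sourceMessageId}|${ticket.category}|${ticket.priority}|${ticket.summary}`
    )
    .digest("hex");
}
```

Hashing the classification fields alongside the message ID means a re-run that produces a different classification writes a new row instead of silently overwriting the old one.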

By Cyprian Aarons, AI Consultant at Topiax.