LangGraph Tutorial (TypeScript): parsing structured output for beginners

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to make a LangGraph workflow return structured JSON instead of messy free-text. You need this when an LLM must produce data your app can validate, store, or pass into another service without brittle string parsing.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project with ts-node or a build step
  • An OpenAI API key
  • These packages:
    • @langchain/langgraph
    • @langchain/openai
    • zod
    • dotenv
  • A .env file with:
    • OPENAI_API_KEY=...

Step-by-Step

  1. Start by installing the dependencies and setting up environment variables. This example uses OpenAI models through LangChain, and Zod to define the output shape we want from the graph.

```shell
npm install @langchain/langgraph @langchain/openai zod dotenv
npm install -D typescript ts-node @types/node
```
  2. Define the schema for the structured output first. This is the contract your model must satisfy, and it keeps downstream code simple because you can work with typed fields instead of parsing text.

```typescript
import "dotenv/config";
import { z } from "zod";

export const SupportTicketSchema = z.object({
  category: z.enum(["billing", "technical", "account", "other"]),
  priority: z.enum(["low", "medium", "high"]),
  summary: z.string(),
});

export type SupportTicket = z.infer<typeof SupportTicketSchema>;
```
  3. Create a graph node that asks the model to classify a customer message and return JSON matching the schema. The important part here is withStructuredOutput, which tells the model wrapper to parse and validate the response for you.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { SupportTicketSchema, type SupportTicket } from "./schema";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const GraphState = Annotation.Root({
  message: Annotation<string>(),
  ticket: Annotation<SupportTicket | null>(),
});

async function classifyMessage(state: typeof GraphState.State) {
  const structuredLLM = llm.withStructuredOutput(SupportTicketSchema);
  const ticket = await structuredLLM.invoke(
    `Classify this support request:\n${state.message}`
  );

  return { ticket };
}
```
  4. Wire the node into a minimal LangGraph workflow. This graph takes one input message, runs classification once, and returns a validated object in state.

```typescript
const graph = new StateGraph(GraphState)
  .addNode("classify", classifyMessage)
  .addEdge(START, "classify")
  .addEdge("classify", END)
  .compile();

async function main() {
  const result = await graph.invoke({
    message: "My card was charged twice and I need this fixed today.",
    ticket: null,
  });

  console.log(result.ticket);
}

main().catch(console.error);
```
  5. If you want stronger validation, keep Zod strict and fail fast on bad outputs. That way you catch model mistakes immediately instead of letting malformed data flow into your database or queue.

```typescript
import { z } from "zod";

const StrictSupportTicketSchema = z.object({
  category: z.enum(["billing", "technical", "account", "other"]),
  priority: z.enum(["low", "medium", "high"]),
  summary: z.string().min(10),
}).strict();
```

Testing It

Run the script with npx ts-node your-file.ts after setting OPENAI_API_KEY. You should see an object like { category: 'billing', priority: 'high', summary: '...' } printed to the console.

If the model returns something invalid, Zod will throw before your app uses it. That is exactly what you want in production workflows where structured output feeds routing, storage, or human review queues.

Try a few different inputs:

  • billing complaints
  • login issues
  • vague requests like “help me”

You should see different categories and priorities while keeping the same object shape every time.

Next Steps

  • Add a second node that routes tickets based on category
  • Use graph.stream() to inspect intermediate state during debugging
  • Replace single-shot classification with a multi-step extraction flow for invoices or claims


By Cyprian Aarons, AI Consultant at Topiax.
