LangGraph Tutorial (TypeScript): parsing structured output for intermediate developers
This tutorial shows how to take raw LLM text in a LangGraph TypeScript workflow and turn it into typed structured output you can actually use downstream. You need this when your agent must return predictable JSON for validation, routing, persistence, or API calls instead of free-form text.
What You'll Need
- Node.js 18+
- A TypeScript project with ts-node or tsx
- @langchain/openai
- @langchain/core
- @langchain/langgraph
- zod
- An OpenAI API key in OPENAI_API_KEY
- Basic familiarity with LangGraph state, nodes, and edges
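The dependencies above can be installed in one step. This assumes npm; swap in pnpm or yarn as needed:

```shell
npm install @langchain/openai @langchain/core @langchain/langgraph zod
npm install -D typescript tsx
```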
Step-by-Step
- Start with a typed state that carries both the raw model output and the parsed result. The important part is to keep the original text around so you can debug parsing failures without losing context.

import { z } from "zod";

export const TicketSchema = z.object({
  category: z.enum(["billing", "technical", "account"]),
  priority: z.enum(["low", "medium", "high"]),
  summary: z.string(),
});

export type Ticket = z.infer<typeof TicketSchema>;

export interface GraphState {
  input: string;
  rawOutput?: string;
  ticket?: Ticket;
}
- Create a model node that asks for strict JSON and stores the raw response. The key detail is that prompt discipline still matters even when you parse later.

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export async function generateRawTicket(
  state: GraphState
): Promise<Partial<GraphState>> {
  const messages = [
    new SystemMessage(
      'Return ONLY valid JSON matching this schema: {"category":"billing|technical|account","priority":"low|medium|high","summary":"string"}'
    ),
    new HumanMessage(state.input),
  ];
  const response = await model.invoke(messages);
  return { rawOutput: response.content.toString() };
}
- Parse and validate the model output with Zod inside a dedicated node. This is the part that makes the workflow production-friendly because malformed output becomes a controlled failure instead of silent bad data.

export function parseTicket(state: GraphState): Partial<GraphState> {
  if (!state.rawOutput) {
    throw new Error("Missing rawOutput");
  }
  const parsedJson = JSON.parse(state.rawOutput);
  const ticket = TicketSchema.parse(parsedJson);
  return { ticket };
}
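Models sometimes wrap their JSON in markdown fences even when told not to. A small pre-processing helper can make the parse node more forgiving; extractJson is a name introduced here for illustration, not a LangChain API:

```typescript
// Strips optional markdown code fences (with or without a "json" tag)
// around a JSON payload, returning the inner text unchanged otherwise.
export function extractJson(raw: string): string {
  const trimmed = raw.trim();
  const fenceMatch = trimmed.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  return fenceMatch ? fenceMatch[1] : trimmed;
}
```

Inside parseTicket you would then call JSON.parse(extractJson(state.rawOutput)) instead of parsing the raw string directly.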
- Build the graph and connect the nodes in order. LangGraph keeps this simple: one node generates text, the next node parses it, and the final state contains both forms. Note the package name is @langchain/langgraph, and the StateGraph constructor needs a channels definition for the state keys.

import { StateGraph, START, END } from "@langchain/langgraph";

const graph = new StateGraph<GraphState>({
  channels: {
    input: null,
    rawOutput: null,
    ticket: null,
  },
})
  .addNode("generateRawTicket", generateRawTicket)
  .addNode("parseTicket", parseTicket)
  .addEdge(START, "generateRawTicket")
  .addEdge("generateRawTicket", "parseTicket")
  .addEdge("parseTicket", END);

export const app = graph.compile();
- Run the graph with an input that looks like real support data. If you want reliable structured output, test with messy human language instead of clean demo prompts.

async function main() {
  const result = await app.invoke({
    input:
      "Customer says their invoice shows two charges for January and wants this fixed urgently.",
  });
  console.log("RAW:", result.rawOutput);
  console.log("PARSED:", result.ticket);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
- Add a guardrail for invalid JSON so your graph fails loudly with useful errors. In production systems, this is where you decide whether to retry, route to a fallback node, or send the case to human review.

export function safeParseTicket(state: GraphState): Partial<GraphState> {
  if (!state.rawOutput) throw new Error("Missing rawOutput");
  try {
    const parsedJson = JSON.parse(state.rawOutput);
    return { ticket: TicketSchema.parse(parsedJson) };
  } catch (error) {
    throw new Error(
      `Failed to parse structured output: ${(error as Error).message}\nRaw output: ${state.rawOutput}`
    );
  }
}
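The retry decision mentioned above can be sketched generically. withRepair and its parameters are illustrative names, not LangGraph APIs; in practice, generate would call the model and could feed the previous error back into a repair prompt:

```typescript
// Generic retry-with-repair sketch: `generate` produces raw text (optionally
// given the previous error message so it can ask the model to fix its output),
// and `parse` throws on anything invalid.
export async function withRepair<T>(
  generate: (previousError?: string) => Promise<string>,
  parse: (raw: string) => T,
  maxAttempts = 3
): Promise<T> {
  let lastError: Error | undefined;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const raw = await generate(lastError?.message);
    try {
      return parse(raw);
    } catch (error) {
      lastError = error as Error;
    }
  }
  throw new Error(
    `Still invalid after ${maxAttempts} attempts: ${lastError?.message}`
  );
}
```

Because the model call and the parser are injected, the loop itself can be unit-tested with stub functions and no API key.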
Testing It
Run the script with OPENAI_API_KEY set and confirm you get two outputs: the raw JSON string and the validated ticket object. If parsing fails, inspect whether the model returned markdown fences, extra commentary, or invalid enum values.
A good test is to feed inputs that are ambiguous or incomplete and see whether your schema catches bad data early. You should also verify that downstream code only reads from ticket, not rawOutput, unless it is explicitly handling errors or logs.
If you want stronger confidence, add unit tests around safeParseTicket using fixed strings rather than live model calls. That gives you deterministic coverage for malformed JSON, missing fields, and schema violations.
Next Steps
- Add retries with a repair prompt when JSON parsing fails
- Replace manual JSON parsing with provider-native structured output where available
- Add conditional edges in LangGraph to route invalid parses to human review
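The last item can be sketched without any live calls: a routing function inspects state and returns the name of the next node. routeAfterParse and the node names are illustrative; the commented wiring line assumes LangGraph's addConditionalEdges and that those nodes exist on the graph:

```typescript
// Decide where to go after parsing: a valid ticket proceeds to normal
// handling, anything else is routed to human review.
interface RoutedState {
  ticket?: { category: string; priority: string; summary: string };
}

export function routeAfterParse(
  state: RoutedState
): "handleTicket" | "humanReview" {
  return state.ticket ? "handleTicket" : "humanReview";
}

// Wiring sketch (hypothetical node names):
// graph.addConditionalEdges("parseTicket", routeAfterParse);
```

Keeping the routing decision in a plain function like this makes it trivially unit-testable, independent of the graph.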
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.