LangChain Tutorial (TypeScript): parsing structured output for beginners
This tutorial shows you how to take free-form LLM text and turn it into typed, structured data in TypeScript using LangChain. You need this when your app must reliably extract fields like names, dates, priorities, or action items instead of guessing from raw text.
What You'll Need
- Node.js 18+
- A TypeScript project with `ts-node` or a build step
- An OpenAI API key
- These packages: `langchain`, `@langchain/openai`, `zod`, and `dotenv`
Install them like this:
```shell
npm install langchain @langchain/openai zod dotenv
npm install -D typescript ts-node @types/node
```
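If `ts-node` complains about module settings when you run the examples, a minimal `tsconfig.json` along these lines is one reasonable baseline (these exact options are a suggestion, not the only valid configuration):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "CommonJS",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true
  }
}
```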
Set your API key in `.env`:

```shell
OPENAI_API_KEY=your_key_here
```
Step-by-Step
- Start by defining the shape you want back from the model. Zod is the cleanest way to do this in LangChain because it gives you both runtime validation and TypeScript type inference.
```ts
import "dotenv/config";
import { z } from "zod";

const TicketSchema = z.object({
  customerName: z.string(),
  issueType: z.enum(["billing", "technical", "account", "other"]),
  priority: z.enum(["low", "medium", "high"]),
  summary: z.string(),
});

type Ticket = z.infer<typeof TicketSchema>;
```
- Create a parser from that schema. LangChain will use this parser to tell the model what format to return and then validate the output before your app touches it.
```ts
import { StructuredOutputParser } from "@langchain/core/output_parsers";

const parser = StructuredOutputParser.fromZodSchema(TicketSchema);
console.log(parser.getFormatInstructions());
```
- Build a prompt that includes the format instructions. This is the part beginners usually skip, and it is why their models return messy prose instead of usable JSON.
```ts
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You extract support ticket data from user messages. Return only structured output.",
  ],
  ["human", "{input}\n\n{format_instructions}"],
]);

const formattedPrompt = await prompt.formatMessages({
  input: "Hi, I'm Sarah Chen. My card was charged twice and I need this fixed today.",
  format_instructions: parser.getFormatInstructions(),
});
console.log(formattedPrompt);
```
- Send the prompt to a chat model and parse the response. Use a real model class from `@langchain/openai`, then pass the raw response through the parser so you get validated data instead of unchecked text.
```ts
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const result = await model.invoke(formattedPrompt);
const parsed: Ticket = await parser.parse(result.content as string);
console.log(parsed);
```
- Put it all together in one runnable file. This version is what you should actually run while learning, because it shows the full request-response-parse loop end to end.
```ts
import "dotenv/config";
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StructuredOutputParser } from "@langchain/core/output_parsers";

const TicketSchema = z.object({
  customerName: z.string(),
  issueType: z.enum(["billing", "technical", "account", "other"]),
  priority: z.enum(["low", "medium", "high"]),
  summary: z.string(),
});

async function main() {
  const parser = StructuredOutputParser.fromZodSchema(TicketSchema);
  const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You extract support ticket data from user messages."],
    ["human", "{input}\n\n{format_instructions}"],
  ]);

  const messages = await prompt.formatMessages({
    input: "Hi, I'm Sarah Chen. My card was charged twice and I need this fixed today.",
    format_instructions: parser.getFormatInstructions(),
  });

  const response = await model.invoke(messages);
  const parsed = await parser.parse(response.content as string);
  console.log(parsed);
}

main().catch(console.error);
```
Testing It
Run the file with `npx ts-node your-file.ts`. You should see an object with `customerName`, `issueType`, `priority`, and `summary`, not a blob of markdown or plain English.
If parsing fails, check two things first: whether your prompt includes `format_instructions`, and whether you are using a low temperature setting. For structured extraction, randomness hurts reliability.
Also test with inputs that are intentionally ambiguous or noisy. If the schema is too strict, Zod will reject bad values early, which is exactly what you want in production.
Next Steps
- Learn `withStructuredOutput()` for newer LangChain patterns that reduce manual parsing work.
- Add retry logic for parse failures when the model returns malformed output.
- Extend the schema with nested objects and arrays for more realistic business documents like claims, invoices, or KYC notes.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.