LangChain Tutorial (TypeScript): streaming agent responses for beginners
This tutorial shows you how to build a LangChain agent in TypeScript that streams partial responses back to the caller as they’re generated. You need this when you want a better UX than waiting for a full answer, especially in chat apps, support tools, and internal copilots.
What You'll Need
- Node.js 18+ installed
- A TypeScript project initialized
- `langchain` and `@langchain/openai` installed
- An OpenAI API key set as `OPENAI_API_KEY`
- A terminal that can run `ts-node`, `tsx`, or compiled Node output
Install the packages:
```bash
npm install langchain @langchain/openai
npm install -D typescript tsx @types/node
```
Set your API key:
```bash
export OPENAI_API_KEY="your-api-key-here"
```
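Before calling the API, it can help to fail fast with a clear message when the key is missing, instead of hitting a confusing authentication error deep inside the client. A small guard you can drop at the top of your entry file (a sketch; the helper name is my own, not part of LangChain):

```typescript
// Throw early with a readable message if a required env var is unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. call requireEnv("OPENAI_API_KEY") before constructing the model.
```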
Step-by-Step
- Create a minimal TypeScript entry file and wire up the model.
We’ll use a chat model that supports streaming, then wrap it in an agent so we can ask it questions through tools later.
```ts
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
  streaming: true,
});

async function main() {
  const response = await model.invoke([
    new HumanMessage("Say hello in one short sentence."),
  ]);
  console.log(response.content);
}

main().catch(console.error);
```
- Add a tool so the agent has something useful to do.
Beginners often stream plain model output first, then add tools once the streaming loop is working.
```ts
import { DynamicTool } from "@langchain/core/tools";

const getTimeTool = new DynamicTool({
  name: "get_time",
  description: "Get the current server time in ISO format.",
  func: async () => new Date().toISOString(),
});
```
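A `DynamicTool`'s `func` receives whatever the model passes as a single string and must return a string, since tool outputs go back to the model as text. If you want a tool that takes input, you parse that string yourself; keeping the parsing in a plain function makes it easy to test. Here is a sketch of the logic for a hypothetical `add` tool (the tool name and input shapes are my assumptions, not from the tutorial):

```typescript
// The model might send "3, 4" or '{"a": 3, "b": 4}', depending on how
// it decides to format the tool input. Parse defensively and return a
// string result.
function addFromToolInput(input: string): string {
  try {
    const parsed = JSON.parse(input);
    if (typeof parsed === "object" && parsed !== null) {
      return String(Number(parsed.a) + Number(parsed.b));
    }
  } catch {
    // Not JSON; fall through to comma-separated parsing.
  }
  const [a, b] = input.split(",").map((s) => Number(s.trim()));
  return String(a + b);
}

// Wrap it the same way getTimeTool wraps Date:
//   new DynamicTool({ name: "add", description: "Add two numbers.",
//                     func: async (input) => addFromToolInput(input) })
```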
- Build an agent executor with the tool and stream its events.
The important part here is `streamEvents()`, which lets you print tokens and tool activity as they happen.
```ts
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

// The agent_scratchpad placeholder is required by
// createOpenAIFunctionsAgent: it is where intermediate tool calls and
// tool results get inserted between turns.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Use tools when needed."],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

async function runAgent() {
  const agent = await createOpenAIFunctionsAgent({
    llm: model,
    tools: [getTimeTool],
    prompt,
  });
  const executor = new AgentExecutor({
    agent,
    tools: [getTimeTool],
  });
  return executor;
}
```
- Stream the response events to stdout.
This is where beginners usually get stuck: you do not wait for the final result first. You consume the async stream and render chunks as they arrive.
```ts
async function streamAnswer() {
  const executor = await runAgent();
  const stream = await executor.streamEvents(
    { input: "What time is it right now? Reply briefly." },
    { version: "v1" }
  );

  for await (const event of stream) {
    if (event.event === "on_chat_model_stream") {
      process.stdout.write(event.data.chunk.content ?? "");
    }
    if (event.event === "on_tool_start") {
      console.log("\n[tool started]");
    }
    if (event.event === "on_tool_end") {
      console.log("\n[tool finished]");
    }
  }
}
```
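One subtlety the loop glosses over: depending on the model and `@langchain/core` version, a chunk's `content` may be a plain string or an array of content parts, so `?? ""` alone isn't enough in every case. A small normalizer you can call before writing to stdout (a sketch; the part shape follows `@langchain/core`'s message content types):

```typescript
// A content part can be a string or an object like
// { type: "text", text: "..." }. Flatten either form into plain text.
type ContentPart = { type?: string; text?: string } | string;

function chunkToText(content: string | ContentPart[] | undefined): string {
  if (typeof content === "string") return content;
  if (!Array.isArray(content)) return "";
  return content
    .map((part) => (typeof part === "string" ? part : part.text ?? ""))
    .join("");
}
```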
- Run the script from a single entry point.
Keep it simple while learning: call one function, watch the output stream, then expand from there. Replace the quick-test `main()` from the first step with this one; a module can only declare one `main()`.
```ts
async function main() {
  await streamAnswer();
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```
Testing It
Run the file with `tsx` so you don't need a separate build step:

```bash
npx tsx src/index.ts
```
If it works, you should see output appear incrementally instead of all at once. When the agent decides to use the tool, you'll also see your `[tool started]` and `[tool finished]` markers.
If you only see a final answer with no intermediate chunks, check that `streaming: true` is set on `ChatOpenAI`. If tool calls never appear, make sure your prompt gives the agent a reason to use the tool.
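When the stream "works" but the events you expect never fire, a quick debugging move is to tally every event name you receive and print the counts once at the end. A sketch, assuming you push each event into an array inside the `for await` loop:

```typescript
// Count how many times each event name appeared in a run.
function tallyEvents(events: Array<{ event: string }>): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const e of events) {
    counts[e.event] = (counts[e.event] ?? 0) + 1;
  }
  return counts;
}

// Inside the for-await loop: seen.push({ event: event.event });
// After the loop:           console.log(tallyEvents(seen));
```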
Next Steps
- Add more tools, such as database lookup or policy search tools for enterprise workflows
- Switch from `streamEvents()` to UI-friendly token rendering in React or Next.js
- Learn how to persist conversation state with memory or message history
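As a bridge toward that UI-friendly rendering, here is a framework-free sketch of streaming tokens to a browser over server-sent events with Node's built-in `http` module. The token source is faked; in a real endpoint you would swap in the `for await` loop over `streamEvents()` from the tutorial:

```typescript
import { createServer } from "node:http";

// Format one token as a server-sent-events frame.
function sseFrame(data: string): string {
  return `data: ${JSON.stringify(data)}\n\n`;
}

// Stand-in for the real token source.
async function* fakeTokens() {
  yield* ["Hello", ", ", "world", "!"];
}

const server = createServer(async (_req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  for await (const token of fakeTokens()) {
    res.write(sseFrame(token));
  }
  res.write("data: [DONE]\n\n");
  res.end();
});

// server.listen(3000);  // then read the stream with EventSource in the browser
```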
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.