LangChain Tutorial (TypeScript): Adding Authentication for Intermediate Developers
This tutorial shows how to add authentication to a LangChain TypeScript app so only approved users can call your chain or agent. You need this when you expose an AI workflow behind an API, a dashboard, or a Slack bot and want to block anonymous access before any model call happens.
What You'll Need
- Node.js 18+
- TypeScript 5+
- A LangChain TypeScript project
- An OpenAI API key
- An auth provider or token source, such as:
  - JWTs from your backend
  - API keys from your own app
  - Session cookies if you're behind a web framework
- Packages: `langchain`, `@langchain/openai`, `zod`, `express`, `jsonwebtoken`, `dotenv`, `typescript`, and `ts-node` or `tsx`
Step-by-Step
- Start with a basic LangChain chain and keep it separate from auth. The important pattern is to make the chain pure, then wrap it with authentication at the API boundary.

```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant for bank support staff."],
  ["human", "{question}"],
]);

export const supportChain = prompt.pipe(model);
```
- Add token verification before calling the chain. In production, this is where you validate a JWT, check expiration, and attach the user identity to the request context.

```typescript
import jwt from "jsonwebtoken";

export type AuthUser = {
  sub: string;
  email?: string;
  role?: string;
};

// The fallback secret is for local development only; always set JWT_SECRET in production.
const JWT_SECRET = process.env.JWT_SECRET || "dev-secret-change-me";

export function authenticateBearerToken(authHeader?: string): AuthUser {
  if (!authHeader?.startsWith("Bearer ")) {
    throw new Error("Missing Bearer token");
  }
  const token = authHeader.slice("Bearer ".length);
  let payload: AuthUser;
  try {
    // Throws on a bad signature, a malformed token, or an expired `exp` claim.
    payload = jwt.verify(token, JWT_SECRET) as AuthUser;
  } catch {
    throw new Error("Invalid or expired token");
  }
  if (!payload.sub) {
    throw new Error("Invalid token payload");
  }
  return payload;
}
```
- Wrap the chain in an Express endpoint. This keeps auth outside LangChain internals and makes it easy to enforce per-route permissions before any LLM call runs.

```typescript
import express from "express";
import jwt from "jsonwebtoken";
import { supportChain } from "./chain";
import { authenticateBearerToken } from "./auth";

const app = express();
app.use(express.json());

app.post("/support", async (req, res) => {
  try {
    const user = authenticateBearerToken(req.header("authorization"));
    const question = String(req.body?.question ?? "");
    if (!question.trim()) {
      return res.status(400).json({ error: "question is required" });
    }
    const response = await supportChain.invoke({
      question: `[user:${user.sub}] ${question}`,
    });
    res.json({ user: user.sub, answer: response.content });
  } catch (error) {
    const message = error instanceof Error ? error.message : "Unauthorized";
    // Map JWT verification failures and our own auth errors to 401; note that
    // raw jsonwebtoken errors like "jwt expired" don't contain the word "token",
    // so the instanceof check matters. Anything else is a server error.
    const unauthorized =
      error instanceof jwt.JsonWebTokenError || message.includes("token");
    res.status(unauthorized ? 401 : 500).json({ error: message });
  }
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
```
- If you need role-based access control, enforce it before invoking the chain. This is common in insurance workflows where underwriters can see more than customer service agents.

```typescript
import { authenticateBearerToken } from "./auth";

type Role = "agent" | "underwriter" | "admin";

function requireRole(userRole: string | undefined, allowed: Role[]) {
  if (!userRole || !allowed.includes(userRole as Role)) {
    throw new Error("Forbidden");
  }
}

async function handleSensitiveRequest(authHeader?: string) {
  const user = authenticateBearerToken(authHeader);
  // Only underwriters and admins may reach the sensitive chain.
  requireRole(user.role, ["underwriter", "admin"]);
  return {
    ok: true,
    userId: user.sub,
    role: user.role,
  };
}

handleSensitiveRequest(process.env.AUTH_HEADER)
  .then(console.log)
  .catch(console.error);
```
- If you want to pass identity into downstream prompts or tools, do it explicitly. Don't rely on hidden globals; make authorization data part of the input contract so audits are easier later.

```typescript
import { z } from "zod";

const InputSchema = z.object({
  userId: z.string(),
  role: z.string().optional(),
  question: z.string().min(1),
});

export async function buildSecureInput(raw: unknown) {
  // Throws a ZodError if the caller omits userId or sends an empty question.
  const input = InputSchema.parse(raw);
  return {
    question: `User ${input.userId} (${input.role ?? "unknown"}) asked: ${input.question}`,
    userId: input.userId,
    role: input.role,
  };
}
```
Testing It
Run the server and send one request without an Authorization header. You should get a 401 with `Missing Bearer token`, and no OpenAI call should happen.
Then generate a valid JWT with the same `JWT_SECRET` and send it as `Authorization: Bearer <token>`. The request should return a model answer plus the authenticated user field.
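One way to mint that test token is a short script. With jsonwebtoken installed you would normally call `jwt.sign({ sub: "user-123" }, JWT_SECRET)`; the dependency-free sketch below (the `mintTestToken` name is illustrative) builds the same HS256 token with `node:crypto`, assuming the development fallback secret from the auth module:

```typescript
import { createHmac } from "node:crypto";

// Build an HS256 JWT by hand for testing. In real code, prefer
// jwt.sign(payload, JWT_SECRET) from jsonwebtoken.
function mintTestToken(payload: object, secret: string): string {
  const b64url = (data: string) => Buffer.from(data).toString("base64url");
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const signature = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${signature}`;
}

// Token for a fake user; exp is one hour out so jwt.verify accepts it.
const token = mintTestToken(
  { sub: "user-123", role: "agent", exp: Math.floor(Date.now() / 1000) + 3600 },
  process.env.JWT_SECRET ?? "dev-secret-change-me"
);
console.log(token);
```

Paste the printed token into the Authorization header of your test request.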
Try a malformed token and confirm it fails before reaching LangChain. If you added role checks, test both allowed and forbidden roles so you know your policy is enforced at the edge.
A quick curl test looks like this:

```bash
curl -X POST http://localhost:3000/support \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_JWT_HERE" \
  -d '{"question":"What is the claim status for policy A123?"}'
```
Next Steps
- Move auth into middleware so every route gets consistent identity handling.
- Add audit logging for `user.sub`, route name, latency, and model usage.
- Replace shared secrets with OAuth2 or OIDC tokens if this will face real users outside your internal network.
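The middleware idea can be sketched as below. The `requireAuth` name and the minimal request/response types are illustrative, not from the tutorial code; with Express installed you would type the handler with `Request`, `Response`, and `NextFunction` and register it once via `app.use(requireAuth(authenticateBearerToken))`:

```typescript
// Minimal structural types so this sketch stands alone; with Express you
// would import Request, Response, and NextFunction instead.
type Req = { header(name: string): string | undefined; user?: { sub: string } };
type Res = { status(code: number): Res; json(body: unknown): void };
type Next = () => void;

// Wraps any token verifier into reusable middleware: on success the
// authenticated user is attached to req.user and the request continues;
// on failure it stops with a 401 before reaching any route handler.
export function requireAuth(verify: (header?: string) => { sub: string }) {
  return (req: Req, res: Res, next: Next): void => {
    try {
      req.user = verify(req.header("authorization"));
      next();
    } catch {
      res.status(401).json({ error: "Unauthorized" });
    }
  };
}
```

Because the verifier is injected as a parameter, the same middleware works whether you validate JWTs, API keys, or session cookies.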
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.