LangChain Tutorial (TypeScript): adding authentication for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add authentication to a LangChain TypeScript app so only approved users can call your LLM workflows. You need this when your chain is exposed through an API, embedded in a customer portal, or used inside an internal tool where access control and auditability matter.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • npm or pnpm
  • A LangChain-compatible LLM API key
  • A small auth layer:
    • either JWTs from your identity provider
    • or an API key issued by your backend
  • Packages:
    • langchain
    • @langchain/openai
    • express
    • jsonwebtoken
    • zod
    • dotenv
    • ts-node or a compiled TypeScript setup

Step-by-Step

  1. Start with a minimal LangChain runnable that accepts user input. Keep the chain separate from auth so you can test business logic without touching security code.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise assistant."],
  ["user", "{question}"],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

async function main() {
  const answer = await chain.invoke({ question: "What is authentication?" });
  console.log(answer);
}

main().catch(console.error);
  2. Add a typed auth context and validate it before the chain runs. In production, this is where you decode a JWT, check expiry, and attach user claims to the request.
import jwt from "jsonwebtoken";
import { z } from "zod";

const AuthClaimsSchema = z.object({
  sub: z.string(),
  email: z.string().email(),
  role: z.enum(["user", "admin"]),
});

type AuthClaims = z.infer<typeof AuthClaimsSchema>;

export function verifyBearerToken(authHeader: string | undefined): AuthClaims {
  if (!authHeader?.startsWith("Bearer ")) {
    throw new Error("Missing bearer token");
  }

  const token = authHeader.slice(7);
  const decoded = jwt.verify(token, process.env.JWT_SECRET!) as unknown;
  return AuthClaimsSchema.parse(decoded);
}
  3. Wrap the chain in an HTTP endpoint and reject unauthorized requests before any model call happens. This keeps expensive inference behind the auth gate and makes your logs easier to reason about.
import express from "express";
import { verifyBearerToken } from "./auth.js";

const app = express();
app.use(express.json());

app.post("/ask", async (req, res) => {
  try {
    const user = verifyBearerToken(req.header("authorization"));
    const question = String(req.body?.question ?? "");

    if (!question.trim()) {
      return res.status(400).json({ error: "question is required" });
    }

    const answer = await chain.invoke({ question });
    return res.json({ user: user.sub, answer });
  } catch (err) {
    return res.status(401).json({ error: "unauthorized" });
  }
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
  4. Add role-based access control inside the request handler when certain prompts or tools should be restricted. This is common in banking and insurance workflows where admins can query sensitive operational data but standard users cannot.
app.post("/admin/ask", async (req, res) => {
  try {
    const user = verifyBearerToken(req.header("authorization"));

    if (user.role !== "admin") {
      return res.status(403).json({ error: "forbidden" });
    }

    const question = String(req.body?.question ?? "");
    const answer = await chain.invoke({
      question: `[ADMIN REQUEST by ${user.email}] ${question}`,
    });

    return res.json({ answer });
  } catch (err) {
    return res.status(401).json({ error: "unauthorized" });
  }
});
  5. If you want the chain itself to be auth-aware, pass the authenticated identity through the runnable input. That lets you personalize responses, enforce tenant boundaries, or route to different tools based on the caller.
import { RunnableLambda } from "@langchain/core/runnables";

const secureChain = RunnableLambda.from(async (input: {
  question: string;
  userId: string;
}) => {
  const response = await chain.invoke({
    question: `User ${input.userId} asks: ${input.question}`,
  });

  return response;
});

async function runSecureExample() {
  const output = await secureChain.invoke({
    userId: "user_123",
    question: "Summarize my policy status",
  });

  console.log(output);
}

runSecureExample().catch(console.error);

Testing It

Run the server with valid environment variables and send one request with a valid JWT and one without it. The authenticated request should return a model response, while the unauthenticated request should get a 401.

Use a token with role=user against /admin/ask; it should fail with 403. Then retry with role=admin and confirm the same endpoint returns a normal answer.

Also check that no model call happens when auth fails. In production, that means your logs should show rejection before any OpenAI request is made.
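
For local testing you need valid and invalid tokens to send. Normally you would sign one with jsonwebtoken's jwt.sign, but a dependency-free sketch using only Node's built-in crypto module works too, assuming the secret you pass matches the server's JWT_SECRET. The mintTestToken name and the claim values are illustrative; only the claim shape (sub, email, role) matches the AuthClaimsSchema above.

```typescript
import { createHmac } from "node:crypto";

// Mint an HS256 JWT for local testing. The header and payload are
// base64url-encoded JSON, and the signature is an HMAC-SHA256 over
// "<header>.<payload>" keyed with the shared secret.
function mintTestToken(secret: string, role: "user" | "admin"): string {
  const b64url = (obj: object) =>
    Buffer.from(JSON.stringify(obj)).toString("base64url");

  const header = b64url({ alg: "HS256", typ: "JWT" });
  const payload = b64url({
    sub: "test_user_1",
    email: "test@example.com",
    role,
    exp: Math.floor(Date.now() / 1000) + 3600, // valid for one hour
  });

  const signature = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");

  return `${header}.${payload}.${signature}`;
}

const token = mintTestToken("dev-secret", "admin");
console.log(token);
```

Send the result as `Authorization: Bearer <token>` to /ask and /admin/ask, then repeat with a token minted using a wrong secret to confirm the 401 path.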

Next Steps

  • Add tenant isolation by including tenantId in your JWT claims and filtering every tool call with it.
  • Move auth verification into Express middleware so every route gets consistent enforcement.
  • Add audit logging for prompt text, caller identity, and model output metadata.
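
The middleware refactor from the second bullet can be sketched framework-agnostically like this. The (req, res, next) shapes mirror Express's contract closely enough to translate directly; the verify stand-in, the fixed token, and the requireRole name are all illustrative, with jwt.verify doing the real work in production.

```typescript
// Claim shape matching the AuthClaimsSchema from the tutorial.
type Claims = { sub: string; email: string; role: "user" | "admin" };

interface Req { headers: Record<string, string | undefined>; user?: Claims; }
interface Res {
  statusCode: number;
  body?: unknown;
  status(code: number): Res;
  json(body: unknown): void;
}
type Next = () => void;

// Stand-in for verifyBearerToken; real code would call jwt.verify here.
function verify(header: string | undefined): Claims {
  if (!header?.startsWith("Bearer ")) throw new Error("Missing bearer token");
  if (header.slice(7) !== "good-token") throw new Error("Invalid token");
  return { sub: "user_1", email: "u@example.com", role: "admin" };
}

// Auth middleware: verify once, attach claims to the request,
// and reject before any route handler (or model call) runs.
function authMiddleware(req: Req, res: Res, next: Next): void {
  try {
    req.user = verify(req.headers["authorization"]);
    next();
  } catch {
    res.status(401).json({ error: "unauthorized" });
  }
}

// A role guard layered on top for admin-only routes.
function requireRole(role: Claims["role"]) {
  return (req: Req, res: Res, next: Next): void => {
    if (req.user?.role !== role) {
      res.status(403).json({ error: "forbidden" });
      return;
    }
    next();
  };
}
```

In an Express app this becomes `app.use(authMiddleware)` plus `app.post("/admin/ask", requireRole("admin"), handler)`, so every route gets the same enforcement and handlers only ever see authenticated requests.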

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

