LangChain Tutorial (TypeScript): building conditional routing for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build a conditional router with LangChain in TypeScript that sends each user request to the right chain based on intent. You need this when a single generic chain shouldn't handle every prompt: support questions should go to a helpdesk flow, billing questions to a finance flow, and everything else should fall back to a general assistant.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or tsx
  • langchain, @langchain/core, and @langchain/openai installed
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with ChatPromptTemplate, RunnableLambda, and invoke()

Step-by-Step

  1. Start by installing the packages and setting your environment variable. I’m using OpenAI models here because the routing logic is easier to follow with standard LangChain primitives.
npm install langchain @langchain/core @langchain/openai
export OPENAI_API_KEY="your-key-here"
  2. Define three destination chains: support, billing, and fallback. Each one is just a prompt plus an LLM call, which keeps the routing layer clean and easy to test.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const supportPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a support agent. Answer concisely and focus on troubleshooting."],
  ["human", "{input}"],
]);

const billingPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a billing agent. Handle invoices, charges, refunds, and payment issues."],
  ["human", "{input}"],
]);

const generalPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a general assistant. Answer clearly and directly."],
  ["human", "{input}"],
]);

const supportChain = supportPrompt.pipe(model);
const billingChain = billingPrompt.pipe(model);
const generalChain = generalPrompt.pipe(model);
  3. Add a router function that classifies the input into one of three routes. In production you would usually replace this hardcoded logic with an LLM classifier, but this version is deterministic and good for learning the wiring.
type Route = "support" | "billing" | "general";

function routeInput(input: string): Route {
  const text = input.toLowerCase();

  if (
    text.includes("refund") ||
    text.includes("invoice") ||
    text.includes("charge") ||
    text.includes("payment")
  ) {
    return "billing";
  }

  if (
    text.includes("error") ||
    text.includes("broken") ||
    text.includes("login") ||
    text.includes("reset")
  ) {
    return "support";
  }

  return "general";
}
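Before wiring the router to any chains, it's worth sanity-checking the classifier on its own. This standalone sketch repeats the routeInput logic from above in a slightly more compact form so it runs by itself, with one check per branch:

```typescript
type Route = "support" | "billing" | "general";

function routeInput(input: string): Route {
  const text = input.toLowerCase();
  // Billing keywords take priority over support keywords.
  if (["refund", "invoice", "charge", "payment"].some((k) => text.includes(k))) {
    return "billing";
  }
  if (["error", "broken", "login", "reset"].some((k) => text.includes(k))) {
    return "support";
  }
  return "general";
}

// One sample input per branch.
console.log(routeInput("I was charged twice for my subscription.")); // "billing"
console.log(routeInput("My login is broken."));                      // "support"
console.log(routeInput("Can you explain how your product works?"));  // "general"
```

Note that "charged" matches the substring "charge", so past-tense billing terms still route correctly.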
  4. Build the conditional dispatcher with RunnableLambda. This is the piece that makes the flow feel like one agent while still sending requests down different paths.
import { RunnableLambda } from "@langchain/core/runnables";

const routerChain = RunnableLambda.from(async (input: string) => {
  const route = routeInput(input);

  if (route === "billing") {
    return billingChain.invoke({ input });
  }

  if (route === "support") {
    return supportChain.invoke({ input });
  }

  return generalChain.invoke({ input });
});
  5. Run the router with sample inputs and print the responses. This gives you a simple CLI harness so you can verify each branch before plugging it into an API or agent.
async function main() {
  const queries = [
    "I was charged twice for my subscription.",
    "My login is broken after the password reset.",
    "Can you explain how your product works?",
  ];

  for (const query of queries) {
    const response = await routerChain.invoke(query);
    console.log("\nINPUT:", query);
    console.log("OUTPUT:", response.content);
  }
}

main().catch(console.error);
  6. If you want cleaner production code, wrap the route selection and execution into separate functions. That makes it easier to add logging, metrics, retries, or A/B tests later.
async function dispatch(input: string) {
  const route = routeInput(input);

  switch (route) {
    case "billing":
      return billingChain.invoke({ input });
    case "support":
      return supportChain.invoke({ input });
    default:
      return generalChain.invoke({ input });
  }
}
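To show what the logging and metrics hooks might look like, here is a hypothetical instrumented dispatcher. The handlers map and the stubbed async functions stand in for the real LangChain chains above; the timing-and-logging wrapper is the part that matters:

```typescript
type Route = "support" | "billing" | "general";

// Hypothetical stand-ins for the real chains; each returns a string response.
const handlers: Record<Route, (input: string) => Promise<string>> = {
  billing: async (input) => `billing-response: ${input}`,
  support: async (input) => `support-response: ${input}`,
  general: async (input) => `general-response: ${input}`,
};

function routeInput(input: string): Route {
  const text = input.toLowerCase();
  if (/refund|invoice|charge|payment/.test(text)) return "billing";
  if (/error|broken|login|reset/.test(text)) return "support";
  return "general";
}

async function dispatch(input: string): Promise<string> {
  const route = routeInput(input);
  const start = Date.now();
  const output = await handlers[route](input);
  // One structured log line per request: chosen route plus latency.
  console.log(JSON.stringify({ route, ms: Date.now() - start }));
  return output;
}
```

A record lookup instead of a switch also keeps adding new destinations cheap: registering a route becomes one entry in the map rather than another case branch.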

Testing It

Run the file with npx tsx your-file.ts or your preferred TypeScript runner. You should see each sample query routed to a different chain based on keywords in the input.

Check that billing terms like “refund” and “charged” land in the billing branch, while operational issues like “login” or “broken” land in support. If everything falls into the general branch, your keyword matching is too strict or your test inputs are too vague.

For real validation, add logs showing both the chosen route and the final model output. That makes it obvious whether failures come from routing logic or from prompt quality.

Next Steps

  • Replace keyword routing with an LLM-based classifier using structured output
  • Add confidence thresholds so low-confidence requests fall back to a human handoff path
  • Extend the router with more destinations like KYC, claims, onboarding, or fraud
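The confidence-threshold idea from the list above can be sketched without any LLM. Assume a classifier that returns a route plus a confidence score; the classify function here is a hypothetical stand-in for an LLM call with structured output, and anything below the threshold is sent to a human handoff route:

```typescript
type Route = "support" | "billing" | "general" | "human";

interface Classification {
  route: Exclude<Route, "human">;
  confidence: number; // 0..1, e.g. from an LLM's structured output
}

// Hypothetical classifier stand-in; in production this would be an LLM call.
function classify(input: string): Classification {
  const text = input.toLowerCase();
  if (/refund|invoice|charge|payment/.test(text)) {
    return { route: "billing", confidence: 0.9 };
  }
  if (/error|broken|login|reset/.test(text)) {
    return { route: "support", confidence: 0.9 };
  }
  // Nothing matched, so the classifier is unsure.
  return { route: "general", confidence: 0.4 };
}

const CONFIDENCE_THRESHOLD = 0.6;

function routeWithFallback(input: string): Route {
  const { route, confidence } = classify(input);
  return confidence >= CONFIDENCE_THRESHOLD ? route : "human";
}
```

The threshold value is an assumption to tune against real traffic: too high and everything escalates to humans, too low and ambiguous requests get confidently misrouted.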

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
