LangChain Tutorial (Python): building prompt templates for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build reusable prompt templates in LangChain with Python, then wire them into a small chain you can actually ship. You need this when your prompts stop being one-off strings and start needing structure, variable injection, formatting control, and consistent outputs.

What You'll Need

  • Python 3.10+
  • langchain
  • langchain-openai
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with Python functions and dictionaries
  • A terminal and a virtual environment

Install the packages:

pip install langchain langchain-openai

Step-by-Step

  1. Start with a PromptTemplate for simple string interpolation.
    This is the base pattern: define variables once, then reuse the template across requests.
from langchain_core.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["product", "audience"],
    template=(
        "Write a concise product description for {product}. "
        "Target audience: {audience}."
    ),
)

prompt_text = template.format(
    product="a fraud detection dashboard",
    audience="bank operations managers",
)

print(prompt_text)
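One design note before moving on: format() returns a plain string, while invoke(), the Runnable entry point that LCEL pipelines use (see step 5), returns a PromptValue. A minimal sketch of the difference, reusing the template above:

# invoke() is the Runnable entry point; it returns a PromptValue, not a str.
value = template.invoke({
    "product": "a fraud detection dashboard",
    "audience": "bank operations managers",
})
print(value.to_string())  # same text as format() produced above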
  2. Use partial variables when part of the prompt should stay fixed.
    This is useful for system-like instructions that you don't want to pass in on every call.
from langchain_core.prompts import PromptTemplate

base_template = PromptTemplate(
    input_variables=["topic"],
    partial_variables={"tone": "direct, technical, and concise"},
    template=(
        "Write about {topic} in a {tone} style. "
        "Focus on implementation details."
    ),
)

print(base_template.format(topic="prompt templates in LangChain"))
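If you'd rather bind fixed values after construction, PromptTemplate also exposes a partial() method that returns a new template with those variables filled in. A short sketch of the equivalent pattern:

# Equivalent approach: bind the fixed value after the template is built.
flexible = PromptTemplate(
    input_variables=["topic", "tone"],
    template="Write about {topic} in a {tone} style. Focus on implementation details.",
)
pinned = flexible.partial(tone="direct, technical, and concise")
print(pinned.format(topic="prompt templates in LangChain"))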
  3. Build chat prompts with separate message roles.
    For LLMs that expect conversational structure, ChatPromptTemplate is the right tool.
from langchain_core.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a senior Python engineer writing for developers."),
    ("human", "Explain {concept} with one example."),
])

messages = chat_prompt.format_messages(concept="LangChain prompt templates")

for message in messages:
    print(f"{message.type}: {message.content}")
  4. Add output constraints directly into the template so the model has less room to wander.
    In production, this reduces cleanup work and makes downstream parsing easier.
from langchain_core.prompts import PromptTemplate

structured_template = PromptTemplate(
    input_variables=["issue"],
    template=(
        "Analyze this issue: {issue}\n"
        "Return exactly these sections:\n"
        "1. Root cause\n"
        "2. Impact\n"
        "3. Recommended fix\n"
        "Keep each section under 2 sentences."
    ),
)

print(structured_template.format(
    issue="The model returns inconsistent JSON fields across requests."
))
  5. Connect the prompt template to an LLM chain and run it end-to-end.
    This is where templates become useful: they turn into reusable inputs for actual model calls.
# ChatOpenAI reads OPENAI_API_KEY from the environment automatically.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant for insurance software teams."),
    ("human", "Summarize {topic} in 3 bullet points."),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = prompt | llm

response = chain.invoke({"topic": "policy document versioning"})
print(response.content)
  6. Reuse the same prompt with multiple inputs instead of rebuilding strings manually.
    This keeps your code testable and makes prompt changes centralized.
topics = [
    "claims triage automation",
    "fraud scoring thresholds",
    "customer onboarding workflows",
]

for topic in topics:
    result = chain.invoke({"topic": topic})
    print("=" * 40)
    print(result.content)
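If you'd rather batch these calls than loop over them one at a time, LCEL chains also expose a batch() method that takes a list of input dicts and returns outputs in the same order. A minimal sketch reusing the chain and topics above:

# batch() runs the chain over all inputs and returns the outputs in order.
for result in chain.batch([{"topic": t} for t in topics]):
    print("=" * 40)
    print(result.content)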

Testing It

Run the script and confirm that each step prints a formatted prompt or model response without raising exceptions. If you see ChatOpenAI errors, check that OPENAI_API_KEY is set in your environment and that your network allows outbound API calls.
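A quick guard at the top of the script makes that failure mode explicit before any model call happens (a small sketch; the error message wording is just an example):

import os

# Fail fast with a clear message instead of an auth error deep inside ChatOpenAI.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; export it before running the chain steps.")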

For prompt-only steps, verify that variable substitution is correct and no placeholders like {topic} remain unresolved. For the chain step, make sure the response matches the instruction format you gave it, especially if you asked for bullets or sections.

If you want stricter validation, add assertions around the formatted text before sending it to the model. That catches missing variables early, which matters when prompts are generated dynamically from user input or config files.
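One way to do that is a helper that scans the formatted text for anything that still looks like a placeholder. The assert_fully_formatted function and its regex below are illustrative, not a LangChain API; the sketch reuses structured_template from step 4:

import re

def assert_fully_formatted(text: str) -> str:
    # Illustrative helper: fail if any {placeholder}-style token survived formatting.
    leftover = re.findall(r"\{[a-zA-Z_][a-zA-Z0-9_]*\}", text)
    assert not leftover, f"Unresolved placeholders: {leftover}"
    return text

prompt_text = assert_fully_formatted(structured_template.format(
    issue="The model returns inconsistent JSON fields across requests."
))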

Next Steps

  • Learn MessagesPlaceholder so you can inject conversation history into chat prompts (previewed in the sketch after this list).
  • Add structured output parsing with LangChain output parsers (also shown in the sketch below).
  • Move from single prompts to runnable chains with branching and retries.
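As a preview of the first two items, here is a minimal sketch that combines MessagesPlaceholder with StrOutputParser. The history and question variable names are arbitrary choices, not required names:

from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

# A chat prompt with a slot where prior conversation turns get injected.
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a senior Python engineer writing for developers."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# StrOutputParser turns the model's message object into a plain string.
chain = chat_prompt | llm | StrOutputParser()

answer = chain.invoke({
    "history": [
        HumanMessage(content="What does a prompt template do?"),
        AIMessage(content="It injects named variables into a reusable prompt."),
    ],
    "question": "How do I keep that history across calls?",
})
print(answer)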

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
