LangChain Tutorial (Python): building prompt templates for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build reusable prompt templates in LangChain for Python, then wire them into a simple chain you can test locally. You need this when you want your prompts to stop being hardcoded strings and start behaving like maintainable application code.

What You'll Need

  • Python 3.10+
  • langchain
  • langchain-openai
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with Python functions, dictionaries, and classes
  • A terminal and a virtual environment

Install the packages first:

pip install langchain langchain-openai

Step-by-Step

  1. Start by importing the prompt template class and creating a simple template with variables. The goal is to separate the prompt structure from the runtime values you inject.
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "You are a helpful assistant for beginners.\n"
    "Explain {topic} in simple terms.\n"
    "Use exactly {num_points} bullet points."
)

print(template.input_variables)
  2. Fill the template with real values using .format(). This is the simplest way to see what LangChain is doing before you connect it to an LLM.
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "You are a helpful assistant for beginners.\n"
    "Explain {topic} in simple terms.\n"
    "Use exactly {num_points} bullet points."
)

prompt_text = template.format(topic="prompt templates", num_points=3)
print(prompt_text)
  3. Use partial variables when one value stays fixed across many calls. This is useful in production because it keeps your code cleaner and reduces repeated arguments.
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "You are a helpful assistant for beginners.\n"
    "Explain {topic} in simple terms.\n"
    "Use exactly {num_points} bullet points."
)

beginner_template = template.partial(num_points=3)
print(beginner_template.format(topic="LangChain prompt templates"))
  4. Connect the prompt template to an actual chat model using LCEL, LangChain's pipe syntax. This gives you a runnable chain that takes structured inputs and returns model output.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "You are a helpful assistant for beginners.\n"
    "Explain {topic} in simple terms.\n"
    "Use exactly {num_points} bullet points."
)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = template | llm

response = chain.invoke({"topic": "prompt templates", "num_points": 3})
print(response.content)
  5. Make the prompt more structured with explicit input variables and a stronger format contract. This matters when you want consistent outputs that downstream code can parse or display.
from langchain_core.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["topic", "audience"],
    template=(
        "Write a short explanation of {topic} for {audience}.\n"
        "Rules:\n"
        "- Use plain English\n"
        "- Give one example\n"
        "- Keep it under 80 words"
    ),
)

result = template.format(topic="LangChain prompt templates", audience="Python beginners")
print(result)
  6. Build a reusable function so your application code stays clean. In real projects, this is where you centralize prompt creation instead of scattering strings across files.
from langchain_core.prompts import PromptTemplate

def build_beginner_prompt(topic: str, audience: str = "Python beginners") -> str:
    template = PromptTemplate.from_template(
        "You are teaching {audience}.\n"
        "Explain {topic} clearly.\n"
        "Include one example and one common mistake."
    )
    return template.format(topic=topic, audience=audience)

print(build_beginner_prompt("prompt templates"))

Testing It

Run each snippet in order and confirm that format() returns the expected string before moving on to the model call. If you get an error on the ChatOpenAI step, check that OPENAI_API_KEY is set in your shell session.
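One way to make that check explicit is a small fail-fast helper before constructing the chain (the function name here is just a suggestion, not part of LangChain):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the key from the environment, or stop with a clear message."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it before the chain step.")
    return key
```

Calling require_api_key() at startup turns a confusing authentication traceback into a one-line message you can act on.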

For the chain step, verify that response.content prints natural-language text rather than a raw object dump. If the output is too verbose or inconsistent, set temperature to 0, as in the example.

A good sanity check is to change {topic} and make sure only that part changes while the rest of the prompt stays stable. That tells you your template is reusable and not accidentally hardcoded.

Next Steps

  • Learn ChatPromptTemplate for multi-message chat prompts with system, human, and assistant roles.
  • Add output parsers so your model responses become structured JSON or typed objects.
  • Combine prompt templates with memory or retrieval when your prompts need context from previous turns or documents.
