LangChain Tutorial (Python): Building Prompt Templates for Advanced Developers
This tutorial shows how to build reusable, structured prompt templates in LangChain for Python, then compose them into a pattern you can use in production systems. You need this when one-off prompts start breaking under versioning, localization, or role separation, or when different teams need the same prompt logic with different inputs.
What You'll Need
- Python 3.10+
- `langchain` and `langchain-openai`
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with LangChain `PromptTemplate` and chat models
- A terminal and a virtual environment
Install the packages:
```bash
pip install langchain langchain-openai
```
Step-by-Step
- Start with a plain text template when you need deterministic string formatting. This is the simplest way to separate prompt text from runtime values.
```python
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "You are a compliance analyst.\n"
    "Summarize this policy for a {audience} in {tone} tone:\n\n{policy_text}"
)

prompt = template.format(
    audience="bank operations staff",
    tone="clear and concise",
    policy_text="Customers must verify identity before account changes.",
)
print(prompt)
```
- Move to chat templates when the model should receive role-separated messages. This is the better default for modern chat models because system instructions stay isolated from user input.
```python
from langchain_core.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a senior insurance assistant. Follow policy exactly."),
    ("human", "Explain this clause for a {audience}:\n\n{clause}"),
])

messages = chat_prompt.format_messages(
    audience="claims adjusters",
    clause="Coverage excludes losses caused by intentional misrepresentation.",
)
for message in messages:
    print(f"{message.type}: {message.content}")
```
- Add validation-friendly structure with partial variables. This is useful when part of the prompt never changes, such as company policy, output rules, or jurisdiction-specific constraints.
```python
from langchain_core.prompts import ChatPromptTemplate

base_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an internal risk assistant."),
    ("system", "Always answer using this format: {format_instructions}"),
    ("human", "Review the following case:\n\n{case_text}"),
])

prompt = base_prompt.partial(
    format_instructions="1) Risk level 2) Reason 3) Recommended action"
)
print(prompt.format(case_text="A customer requests an unusual wire transfer at 2 AM."))
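```

After partial application, `case_text` is the only required input left, which is what makes this structure easy to validate in tests. A quick check (a sketch that reuses `base_prompt` from above):

```python
partial_prompt = base_prompt.partial(
    format_instructions="1) Risk level 2) Reason 3) Recommended action"
)
# The partial variable is baked in; only the runtime variable remains.
assert set(partial_prompt.input_variables) == {"case_text"}
```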
- Chain templates together when one prompt feeds another. This pattern is common in production pipelines where you first extract facts, then generate a final response from those facts (a model-in-the-loop version follows the snippet below).
```python
from langchain_core.prompts import PromptTemplate

extract_template = PromptTemplate.from_template(
    "Extract key entities from this text as bullet points:\n\n{text}"
)
summary_template = PromptTemplate.from_template(
    "Using these entities, write a concise operational summary:\n\n{entities}"
)

text = "The client called at 9 PM to change beneficiaries and requested same-day processing."

# Formatting only: in a real pipeline, the model's answer to the first
# prompt (the extracted entities), not the prompt itself, feeds the second template.
entities = extract_template.format(text=text)
summary_prompt = summary_template.format(entities=entities)

print(entities)
print("\n---\n")
print(summary_prompt)
```
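To run the same pattern with a model in the loop, wire both templates into one LCEL chain. A minimal sketch, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set (the model name is just an example):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Step 1: have the model extract entities instead of just formatting the prompt.
extract_chain = extract_template | llm | StrOutputParser()

# Step 2: feed the model's extracted entities into the summary template.
summary_chain = {"entities": extract_chain} | summary_template | llm | StrOutputParser()

print(summary_chain.invoke({"text": text}))
```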
- Execute the prompt against a real chat model so you can verify the template works end to end. Use `ChatOpenAI` directly if you want the smallest possible path from template to response.
```python
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Fail fast if the key is missing instead of silently sending an empty one.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set OPENAI_API_KEY before running this example.")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a precise technical writer."),
    ("human", "Rewrite this for an executive audience:\n\n{text}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | llm

response = chain.invoke({"text": "The service failed because retries were not configured correctly."})
print(response.content)
```
Testing It
Run each snippet independently first to confirm formatting behaves as expected before wiring it into a chain. Check that every placeholder in your template has a matching variable name; missing keys will fail fast, which is exactly what you want in development.
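For example, formatting with a missing variable raises immediately (a quick demonstration with a throwaway template):

```python
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template("Summarize for a {audience}:\n\n{text}")
print(template.input_variables)  # inferred from the placeholders

try:
    template.format(text="Policy update.")  # 'audience' is missing
except KeyError as err:
    print(f"Missing template variable: {err}")
```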
When testing with an LLM, keep `temperature=0` so prompt changes are easier to compare across runs. If output quality shifts unexpectedly, inspect the rendered prompt first; most failures come from bad message ordering, missing context, or overly broad instructions.
For production-style validation, print both the formatted prompt and final model output during local testing. That gives you a clean diff when someone edits the template later.
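A minimal harness for that (a sketch that reuses the `prompt` and `chain` defined in the final step above):

```python
inputs = {"text": "The service failed because retries were not configured correctly."}

print("RENDERED PROMPT")
for message in prompt.format_messages(**inputs):
    print(f"{message.type}: {message.content}")

print("\nMODEL OUTPUT")
print(chain.invoke(inputs).content)
```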
Next Steps
- Learn `MessagesPlaceholder` for injecting conversation history into chat prompts (see the sketch after this list).
- Add structured outputs with `PydanticOutputParser` so downstream code gets typed data.
- Combine prompt templates with LCEL branching for routing between different prompt paths based on document type or risk level.
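As a starting point for the first item, `MessagesPlaceholder` slots a list of prior messages into a chat prompt. A minimal sketch (the history messages are invented for illustration):

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a senior insurance assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])

messages = prompt.format_messages(
    history=[
        HumanMessage(content="What does clause 4 cover?"),
        AIMessage(content="Clause 4 covers water damage from burst pipes."),
    ],
    question="Does that include gradual leaks?",
)
for message in messages:
    print(f"{message.type}: {message.content}")
```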
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.