LangChain Tutorial (Python): implementing guardrails for advanced developers
This tutorial shows how to add practical guardrails to a LangChain Python agent: input validation, output validation, and safe tool execution. You need this when your agent touches user-controlled data, calls external APIs, or can trigger side effects you do not want to trust blindly.
What You'll Need
- Python 3.10+
- A virtual environment
- langchain
- langchain-openai
- pydantic
- python-dotenv
- An OpenAI API key in OPENAI_API_KEY
- Basic familiarity with LangChain chat models and prompt templates
Step-by-Step
- Start by installing the packages and wiring up your environment. Keep the dependencies minimal so the guardrail logic stays easy to audit.
pip install langchain langchain-openai pydantic python-dotenv
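It can also help to confirm the key actually loaded before any chain runs; the check below is a small sanity-check sketch, assuming the key lives in a local .env file as described above.

import os
from dotenv import load_dotenv

load_dotenv()  # reads OPENAI_API_KEY from a local .env file
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file first.")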
- Define strict schemas for what the model is allowed to accept and return. This is your first guardrail: if the input does not match your contract, do not send it downstream.
from dotenv import load_dotenv
from pydantic import BaseModel, Field, ValidationError, constr

load_dotenv()

# Input contract: what callers are allowed to send in.
class SupportRequest(BaseModel):
    customer_id: constr(min_length=3, max_length=20)
    issue: constr(min_length=10, max_length=500)

# Output contract: what the model is allowed to return.
class SupportResponse(BaseModel):
    category: str = Field(pattern="^(billing|technical|account)$")
    summary: constr(min_length=20, max_length=300)
    escalate: bool
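As a quick illustration that both contracts bite (the sample values here are made up), constructing the models with off-policy data fails immediately:

# A too-short issue fails the input contract before any LLM call.
try:
    SupportRequest(customer_id="CUST123", issue="help")
except ValidationError as e:
    print("Input rejected:", e.errors()[0]["msg"])

# An unexpected category fails the output contract.
try:
    SupportResponse(category="refunds", summary="This summary is long enough to pass.", escalate=True)
except ValidationError as e:
    print("Output rejected:", e.errors()[0]["msg"])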
- Build a chain that produces structured output instead of raw text. LangChain can parse directly into a Pydantic model, which makes downstream validation much cleaner than regex-based checks.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a support triage assistant. Classify the issue and decide whether to escalate."),
    ("human", "Customer ID: {customer_id}\nIssue: {issue}")
])

# Parse the model's reply directly into the SupportResponse schema.
structured_llm = llm.with_structured_output(SupportResponse)
triage_chain = prompt | structured_llm
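If you also want to inspect why parsing fails instead of only catching an exception, with_structured_output accepts include_raw=True, which returns the raw message, the parsed object, and any parsing error side by side. The variant below is an optional sketch, not part of the main chain.

# Optional: keep the raw model output alongside the parsed SupportResponse.
debug_chain = prompt | llm.with_structured_output(SupportResponse, include_raw=True)

out = debug_chain.invoke({
    "customer_id": "CUST123",
    "issue": "I was charged twice for my subscription this month."
})
if out["parsing_error"] is not None:
    print("Could not parse model output:", out["parsing_error"])
else:
    print(out["parsed"])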
- Add an input guard function before invoking the chain. This blocks malformed requests early and gives you a single place to enforce business rules like ID format or length limits.
def validate_request(payload: dict) -> SupportRequest:
    return SupportRequest.model_validate(payload)

good_payload = {
    "customer_id": "CUST123",
    "issue": "I was charged twice for my subscription this month."
}

try:
    request = validate_request(good_payload)
    result = triage_chain.invoke(request.model_dump())
    print(result)
except ValidationError as e:
    print("Rejected request:", e)
- Add an output guard after the model responds. Even with structured output, you still want a final policy check before the result reaches another system or an operator dashboard.
def enforce_response_policy(response: SupportResponse) -> SupportResponse:
    if response.category == "billing" and not response.escalate:
        raise ValueError("Billing issues must be escalated.")
    return response

response = triage_chain.invoke(request.model_dump())
safe_response = enforce_response_policy(response)
print(safe_response.model_dump())
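One way to keep these checks from drifting apart is a single entry point that other code calls instead of touching the chain directly; the helper below is a sketch (the name guarded_triage is mine, not part of LangChain) combining the input guard, the model call, and the output policy.

def guarded_triage(payload: dict) -> SupportResponse:
    """Validate input, run the triage chain, then enforce the output policy."""
    request = validate_request(payload)                    # schema guard on the way in
    response = triage_chain.invoke(request.model_dump())   # model call
    return enforce_response_policy(response)               # policy guard on the way out

# safe_response = guarded_triage(good_payload)  # equivalent to the manual steps above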
- If your agent uses tools, wrap them with explicit permission checks. Do not let the model call side-effecting functions directly without verifying intent and scope first.
from langchain_core.tools import tool

@tool
def create_ticket(customer_id: str, category: str, summary: str) -> str:
    """Create a support ticket."""
    return f"ticket_created:{customer_id}:{category}"

def guarded_create_ticket(data: SupportResponse, customer_id: str) -> str:
    if data.escalate is False:
        raise PermissionError("Ticket creation blocked because escalation is false.")
    return create_ticket.invoke({
        "customer_id": customer_id,
        "category": data.category,
        "summary": data.summary,
    })

ticket_id = guarded_create_ticket(safe_response, request.customer_id)
print(ticket_id)
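It is worth exercising the blocked path too; the response below is hand-built with made-up values just to show that a non-escalated result never reaches the tool.

blocked = SupportResponse(
    category="technical",
    summary="Customer reports intermittent login failures on the mobile app.",
    escalate=False,
)

try:
    guarded_create_ticket(blocked, "CUST123")
except PermissionError as e:
    print("Tool call blocked:", e)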
Testing It
Run one valid payload and confirm you get a parsed SupportResponse plus a ticket ID only when escalate=True. Then try invalid inputs like a short issue string or an empty customer_id and verify Pydantic rejects them before the LLM is called.
You should also test policy failures separately from schema failures. For example, if the model returns "billing" with escalate=False, your post-check should raise immediately.
A good production test suite includes:
- Schema rejection tests
- Policy enforcement tests
- Tool permission tests
- Mocked LLM responses for deterministic assertions
A minimal pytest sketch covering these follows.
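It assumes the tutorial code lives in an importable module (the name guardrails_demo is made up; adjust to your layout) and fakes the LLM with a RunnableLambda so the assertions stay deterministic.

import pytest
from pydantic import ValidationError
from langchain_core.runnables import RunnableLambda

# from guardrails_demo import (  # made-up module name; adjust to your project
#     SupportResponse, validate_request, enforce_response_policy, guarded_create_ticket
# )

def test_schema_rejects_short_issue():
    with pytest.raises(ValidationError):
        validate_request({"customer_id": "CUST123", "issue": "help"})

def test_policy_requires_billing_escalation():
    response = SupportResponse(
        category="billing",
        summary="Customer was double-charged for the monthly subscription.",
        escalate=False,
    )
    with pytest.raises(ValueError):
        enforce_response_policy(response)

def test_tool_blocked_without_escalation():
    response = SupportResponse(
        category="technical",
        summary="Password reset emails are not arriving for this customer.",
        escalate=False,
    )
    with pytest.raises(PermissionError):
        guarded_create_ticket(response, "CUST123")

def test_pipeline_with_mocked_llm():
    fake_chain = RunnableLambda(lambda _: SupportResponse(
        category="billing",
        summary="Customer was double-charged for the monthly subscription.",
        escalate=True,
    ))
    result = enforce_response_policy(
        fake_chain.invoke({"customer_id": "CUST123", "issue": "Duplicate charge this month."})
    )
    assert result.escalate is True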
Next Steps
- Add JSON Schema-based validators for more complex payloads and nested objects (a small sketch follows this list).
- Move policy checks into a dedicated guardrail layer so multiple chains can reuse them.
- Add LangSmith tracing so you can inspect where validation fails in real runs.
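For the first item, a convenient starting point is that Pydantic can already emit JSON Schema for the models defined above; the snippet below is a small sketch of that.

# Export the input contract as JSON Schema for reuse outside this chain.
schema = SupportRequest.model_json_schema()
print(schema["properties"]["issue"])  # includes the minLength/maxLength constraints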
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.