Python Decorator for Preventing Robot Indexing
Much of my time at Mozilla has been spent catching up to the rest of the MDN team with respect to Python. The new MDN backend, codenamed Kuma, is entirely Django-based and has been a joy to learn. My latest Python adventures have been focused on increasing MDN's SEO score, a task which includes telling Google not to index certain pages. In doing so, I created my first view decorator: one that sends a response header preventing robots from indexing the given page.
The Python
The first step is importing the decorator dependencies:
from functools import wraps
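The wraps helper copies the original function's metadata (its __name__, docstring, and so on) onto the wrapper, which keeps introspection and debugging tools pointed at the real view. A minimal sketch of the difference it makes (the passthrough decorator here is hypothetical, just for illustration):

from functools import wraps

def passthrough(func):
    @wraps(func)  # without this line, example.__name__ would be '_inner'
    def _inner(*args, **kwargs):
        return func(*args, **kwargs)
    return _inner

@passthrough
def example():
    """Original docstring"""

print(example.__name__)  # 'example', not '_inner'
print(example.__doc__)   # 'Original docstring'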
The next step is creating the decorator definition:
def prevent_indexing(view_func):
    """Decorator to prevent a page from being indexable by robots"""
    @wraps(view_func)
    def _added_header(request, *args, **kwargs):
        response = view_func(request, *args, **kwargs)
        response['X-Robots-Tag'] = 'noindex'
        return response
    return _added_header
The original view is called so that its response can be captured; simply add a key of X-Robots-Tag to the response with its value set to noindex, and the decorator is complete!
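A quick way to confirm the header is actually set is Django's RequestFactory; a minimal sketch, assuming a configured Django project (the sample view is hypothetical):

from django.http import HttpResponse
from django.test import RequestFactory

@prevent_indexing
def sample_view(request):
    """A throwaway view used only to exercise the decorator"""
    return HttpResponse('Hello')

request = RequestFactory().get('/sample/')
response = sample_view(request)
assert response['X-Robots-Tag'] == 'noindex'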
To use the decorator, simply add it to a given view:
@login_required
@prevent_indexing
def new_document(request):
    """Does whatever 'new_document' should do; page should not be indexed"""
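If modifying the view itself isn't an option, the same decorator can be applied where the URL patterns are wired up instead; a minimal sketch, assuming a recent Django version (the module paths and names are hypothetical):

from django.urls import path

from myapp.decorators import prevent_indexing
from myapp.views import new_document

urlpatterns = [
    path('docs/new/', prevent_indexing(new_document), name='new-document'),
]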
Voila -- the X-Robots-Tag header will be sent so that robots won't index the page! Decorators allow for loads of additional functionality for any view, and are easily reusable; I'm glad to add them to my arsenal!