Search engine optimization, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.
But almost every website is going to have pages that you don’t want included in this exploration.
In a best-case scenario, these pages are doing nothing to actively drive traffic to your site, and in a worst-case scenario, they could be diverting traffic away from more important pages.
Luckily, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it’s a plain text file that lives in your site’s root directory and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain directions for specific pages.
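For illustration, a minimal robots.txt sketch might look like the below (the paths and sitemap URL here are hypothetical):

```
User-agent: *
Disallow: /admin/
Allow: /admin/help.html
Sitemap: https://www.example.com/sitemap.xml
```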
Some of the meta robots directives you might employ include:

- index, which tells search engines to add the page to their index.
- noindex, which tells them not to add a page to the index or include it in search results.
- follow, which instructs a search engine to follow the links on a page.
- nofollow, which tells it not to follow links on the page.

And there’s a whole host of others.
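On the page itself, these directives live in a meta tag inside the page’s head. A minimal example, keeping a page out of the index while still letting its links be followed:

```html
<meta name="robots" content="noindex, follow">
```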
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it can control indexing for an entire page, as well as for specific elements on that page.
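To make that concrete, here is a sketch of what a response carrying the header might look like (status line and other headers abbreviated):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```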
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While you can issue indexing and serving directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of on a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify rules.
Maybe you don’t want a certain page to be cached and also want it to be unavailable after a certain date. You can use a combination of the “noarchive” and “unavailable_after” tags to instruct search engine bots to follow these instructions.
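The response could carry both directives as separate header lines or as a comma-separated list; a sketch (the date here is a placeholder):

```http
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2023 15:00:00 PST
```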
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to execute crawl directives on non-HTML content, as well as apply parameters on a larger, global level.
To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here’s a handy cheat sheet to explain:
| Crawler Directives | Indexer Directives |
| --- | --- |
| Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl. | Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results. |
| | Nofollow – allows you to specify links that should not pass on authority or PageRank. |
| | X-Robots-Tag – allows you to control how specified file types are indexed. |
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to the Apache configuration or an .htaccess file. The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
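A sketch using Apache’s mod_headers module (which must be enabled for the Header directive to work):

```apache
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```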
In Nginx, it would look like the below:
```nginx
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
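On Apache, again assuming mod_headers, one way to sketch this rule:

```apache
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
```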
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked from robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
So if directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
Checking For An X-Robots-Tag
There are a few different methods that can be used to check for an X-Robots-Tag on the site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about a URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
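If you prefer the command line, you can also fetch just the headers for a given URL with curl (the URL here is a placeholder):

```bash
curl -I https://www.example.com/sample.pdf
```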
Another method that can be used for scale, in order to pinpoint issues on websites with a million pages, is Screaming Frog. After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer/Shutterstock