Have you ever needed to prevent Google from indexing a particular URL on your web site and displaying it in their search engine results pages (SERPs)? If you manage web sites long enough, a day will likely come when you need to know how to do this.
The three methods most frequently used to prevent the indexing of a URL by Google are as follows:
Using the rel="nofollow" attribute on all anchor elements used to link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file, to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute, to prevent the page from being indexed.
While the differences between the three approaches appear subtle at first glance, their effectiveness can vary drastically depending on which method you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site used to link to that URL.
Adding a rel="nofollow" attribute to a link prevents Google's crawler from following the link, which in turn prevents it from discovering, crawling, and indexing the target page. While this method might work as a short-term measure, it is not a viable long-term solution.
The flaw in this approach is that it assumes all inbound links to the URL will carry a rel="nofollow" attribute. The webmaster, however, has no way to prevent other web sites from linking to the URL with a followed link. So the chances that the URL will eventually get crawled and indexed using this method are quite high.
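For reference, a nofollowed link looks like this (the URL and anchor text here are hypothetical):

```html
<!-- The rel="nofollow" attribute tells Google's crawler not to follow
     this link to discover the target page (URL is a made-up example). -->
<a href="https://example.com/private-page.html" rel="nofollow">Private page</a>
```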
Using robots.txt to prevent Google indexing
Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
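For example, a disallow rule covering a single page (the path below is hypothetical) would look like this in robots.txt:

```text
User-agent: *
Disallow: /private-page.html
```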
Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough web sites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links, and as a result will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
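If you want to check how a disallow directive is interpreted, Python's standard-library robot-exclusion parser applies the same matching rules a well-behaved crawler uses. This is a minimal sketch; the robots.txt contents and URLs are made-up examples:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents for illustration only.
robots_txt = """User-agent: *
Disallow: /private-page.html
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The wildcard rule applies to Googlebot: the disallowed path may not
# be fetched, while other pages on the site remain crawlable.
print(parser.can_fetch("Googlebot", "https://example.com/private-page.html"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/index.html"))         # True
```

Remember that blocking crawling this way only stops Google from reading the page; as noted above, the URL itself can still surface in the SERPs.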
Using the meta robots tag to prevent Google indexing
If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag, they first need to be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it will never be shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
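A minimal sketch of where the tag goes (the page content is a placeholder):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Tells crawlers not to index this page or show it in search results -->
    <meta name="robots" content="noindex">
    <title>Private page</title>
  </head>
  <body>
    ...
  </body>
</html>
```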
J Hodson runs Canonical SEO, an SEO consulting and training firm based in Charlotte, NC. Canonical SEO serves clients throughout the US.