Have you ever needed to prevent Google from indexing a particular URL on your website and displaying it in its search engine results pages (SERPs)? If you manage websites long enough, the day will likely come when you need to know how to do this.
The three methods most commonly used to prevent Google from indexing a URL are as follows:
Using the rel="nofollow" attribute on all anchor elements that link to the page, to prevent the crawler from following the links.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
Although the differences between the three methods seem subtle at first glance, the results can vary substantially depending on which one you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site that links to that URL.
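For example, a nofollow link looks like this (the URL and anchor text here are just placeholders):

    <a href="https://example.com/private-page.html" rel="nofollow">Private page</a>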
Including a rel="nofollow" attribute on a link prevents Google's crawler from following it, which, in turn, prevents the crawler from discovering, crawling, and indexing the target page. While this method may work as a short-term fix, it is not a viable long-term solution.
The flaw in this approach is that it assumes all inbound links to the URL will carry a rel="nofollow" attribute. The webmaster, however, has no way to prevent other websites from linking to the URL with a followed link, so the odds that the URL will eventually get crawled and indexed anyway are quite high.
Using robots.txt to prevent Google indexing
Another common method used to prevent Google from indexing a URL is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question, and Google's crawler will honor it, which prevents the page from being crawled and indexed.
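As a sketch, a disallow rule for a single page looks like this (the path is a placeholder; the robots.txt file itself must sit at the root of the domain):

    User-agent: *
    Disallow: /private-page.html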
In some cases, however, the URL can still show up in the SERPs. Google will sometimes display a URL in its SERPs even though it has never indexed the contents of that page: if enough websites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links, and will then show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
Using the meta robots tag to prevent Google indexing
If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective approach is a meta robots tag with a content="noindex" attribute inside the head element of the page. Of course, for Google to actually see this meta robots tag, it must first be able to discover and crawl the page, so do not also block the URL in robots.txt. When Google crawls the page and finds the meta robots noindex tag, it flags the URL so that it is never shown in the SERPs. This is the most reliable way to prevent Google from indexing a URL and displaying it in its search results.
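For reference, the tag belongs in the head of the page and looks like this (the title is a placeholder):

    <head>
      <meta name="robots" content="noindex">
      <title>Private page</title>
    </head>

The name="robots" value applies to all crawlers; if you want to target Google specifically, you can use name="googlebot" instead.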