Google Stops Honoring Unsupported Rules in Robots.txt

For the last 25 years, the Robots Exclusion Protocol (REP) was only a de-facto standard, which sometimes made implementing it frustrating. Now Google has announced that it is open-sourcing its production robots.txt parser.

In other words, Google has now made it official: from 1st September 2019 onwards, Googlebot will no longer obey robots.txt directives related to indexing. Webmasters therefore need to remove the noindex directive from robots.txt and use one of the alternatives suggested by Google.



Google says that the noindex robots.txt directive is not supported because it was never an official directive. Googlebot honored it in the past, but it will not do so any longer.

Google Mostly Used to Obey the Noindex Directive –

For the last couple of years Google honored the noindex rule in robots.txt, but Google has now announced that it will no longer be supported. Note that noindex is not the same as Disallow, which remains supported: the code below uses Disallow to block all robots from crawling the entire website.

User-agent: *

Disallow: /

If you want to allow all robots to crawl and index your whole website, use the code below:

User-agent: *

Disallow:
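
For comparison, the unofficial noindex rule that Google is retiring was typically written as shown below. The syntax was never standardized, so treat this only as an illustration of the kind of rule that stops working:

User-agent: *

Noindex: /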

Instead of relying on noindex in robots.txt, Google now says you should use one of the following methods to manage your indexing:

- Noindex in robots meta tags (see the example after this list)

- 404 and 410 HTTP status codes

- Password protection

- Disallow in robots.txt

- Search Console Removal Tool
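
Of these, the most direct replacement for a robots.txt noindex rule is the noindex robots meta tag in a page's <head>, or the X-Robots-Tag HTTP response header for non-HTML resources. Both lines below are generic examples; exactly how you add the header depends on your server configuration.

<meta name="robots" content="noindex">

X-Robots-Tag: noindex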




Read Google's official announcement at the link below:

https://webmasters.googleblog.com/2019/07/a-note-on-unsupported-rules-in-robotstxt.html

Read the official tweet here:

https://twitter.com/googlewmc/status/1145950977067016192