@pikcaaJun 21.2017 — #Robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users. The REP also includes directives like meta robots, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links (such as “follow” or “nofollow”).
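To make that concrete, here is a minimal sketch of what a robots.txt file can look like. The paths and sitemap URL are made up for illustration:

```
User-agent: *
Disallow: /private/
Allow: /private/public-report.html

Sitemap: https://www.example.com/sitemap.xml
```

The `User-agent` line says which crawlers the rules apply to (`*` means all of them), and each `Disallow`/`Allow` line covers a path prefix.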
@jedaisoulJun 21.2017 — #Robots.txt serves little or no SEO purpose. It merely excludes those parts of the site (if any) that you want the search bots to ignore. However, given that well-behaved bots will only crawl pages that are directly or indirectly linked from the home page, why would you link to such pages anyway? Badly behaved bots, on the other hand, are unlikely to obey the robots.txt exclusions. Indeed, the file may assist them by identifying private areas of the site!
@vinborisJun 22.2017 — #Robots.txt serves a simple purpose: it lets you state which pages the crawler should crawl and which it should not. Suppose you have an e-commerce website and do not want to expose the checkout page, where sales and purchases happen; robots.txt is helpful there.
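As a quick sanity check of that checkout example, Python's standard-library `urllib.robotparser` shows how a well-behaved crawler would interpret such a rule. The `/checkout/` path and `example.com` domain are just placeholders:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for an e-commerce site: block only the checkout area.
rules = """\
User-agent: *
Disallow: /checkout/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The checkout pages are off-limits, but product pages remain crawlable.
print(rp.can_fetch("*", "https://example.com/checkout/payment"))  # False
print(rp.can_fetch("*", "https://example.com/products/shoes"))    # True
```

In practice a crawler would call `rp.set_url(...)` and `rp.read()` to fetch the live file instead of parsing a string.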
@RH-CalvinJun 22.2017 — #Robots.txt is a text file that contains instructions for search engine robots. The file lists which webpages are allowed and which are disallowed for search engine crawling.
@smithjohn543Jun 24.2017 — #Robots.txt is a text file, and it is used to exclude/block sensitive info, web pages, or directories of a website from search.
If you don't want a web page or directory to show up or be indexed on Google or other search engines, you can mention it in the robots.txt file.
@Abhi71Jun 24.2017 — #Robots.txt is a file where we give instructions to the crawler about which pages to crawl and which not to.
e.g. pages like [b]**Links removed by Site Staff so it doesn't look like you're spamming us. Please don't post them again.** [/b], disclaimer and privacy policy.
We can stop crawlers from crawling our website by giving instructions in the robots.txt file.
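For example, a two-line robots.txt that asks all well-behaved crawlers to skip the entire site looks like this:

```
User-agent: *
Disallow: /
```

By contrast, an empty `Disallow:` line (or no robots.txt file at all) permits everything.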
@stuartspindlow0Jun 26.2017 — #Robots.txt is a text file that instructs the search engine crawler which pages should or should not be crawled, by allowing or disallowing them.
@jennypitulaJun 26.2017 — #Robots.txt is a text file used to instruct search engines which pages or folders should be crawled and which should not. If you do not want a page or folder to be indexed, you can easily mention it in robots.txt; that way the crawler will only crawl what is required, so the indexing process can be faster.
@DipikaJun 28.2017 — #Robots.txt is a text file that mainly tells crawlers which website directories they are permitted to visit. It is used to disallow crawlers from visiting private folders or content that gives them no extra information.