A robots.txt file is a text file that tells web robots (also known as spiders or crawlers) which pages on your website to crawl and which to ignore.
When a crawler accesses a website, it first requests a file named "robots.txt". If the file exists, the crawler reads it for instructions on which pages it should crawl and which it should ignore.
The file lives in the website's root directory and tells bots which pages and files you do or do not want them to crawl or index.
Website owners normally want their pages to be noticed by search engines, but there are cases when that is not wanted. For example, if you store sensitive data, or if you need to save bandwidth by keeping crawlers away from pages with a multitude of images.
You can typically view the file by taking the full URL of the homepage and adding /robots.txt (for example, https://www.example.com/robots.txt).
The file is not linked from anywhere on the site, so users rarely stumble upon it, but most web crawler bots look for this file before crawling the rest of the site.
The most important use of a robots.txt file is to keep parts of your website out of search engine results. Not everything on a website should be shown to the public or to the search engines.
Keep in mind, though, that robots.txt is a publicly readable request, not an enforcement mechanism: well-behaved bots obey it, but it provides no real privacy on its own.
NOTE: There can be only one robots.txt file per website. Robots.txt files for add-on domains or subdomains need to be placed in the corresponding document root.
The robots.txt file is created in your website's root folder: yourwebsite.com/robots.txt
• User-agent: [the name of the robot the rules below apply to]
• Disallow: [page, folder, or path you want to block from crawling]
• Allow: [page, folder, or path you want to permit, overriding a broader Disallow]
• Sitemap: Used to call out the location of any XML sitemap(s) associated with this URL. Note this command is only supported by Google, Ask, Bing, and Yahoo.
• Crawl-delay: How many seconds a crawler should wait before loading and crawling page content. Note that Googlebot does not acknowledge this command, but crawl rate can be set in Google Search Console.
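A minimal robots.txt combining these directives might look like the following sketch (the folder name, file name, and sitemap URL are placeholders):

```
User-agent: *
Disallow: /private/
Allow: /private/public-file.html
Crawl-delay: 10

Sitemap: https://www.example.com/sitemap.xml
```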
If you want to allow all search engines to crawl everything, use the following rules:
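An empty Disallow value blocks nothing:

```
User-agent: *
Disallow:
```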
If you want to disallow all search engines from crawling anything:
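A lone slash blocks the entire site:

```
User-agent: *
Disallow: /
```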
If you want to disallow a specific folder for all search engines:
User-agent: *
Disallow: /folder-name/
If you want to disallow a specific file for all search engines:
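The path points at the individual file (the folder and file names are placeholders):

```
User-agent: *
Disallow: /folder-name/file-name.html
```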
If you want to disallow a folder but allow the crawling of one file in that folder (all search engines):
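The Allow rule carves a single path out of the broader Disallow (the names are placeholders):

```
User-agent: *
Disallow: /folder-name/
Allow: /folder-name/allowed-file.html
```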
To allow only one specific robot to crawl the website:
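Here Googlebot stands in for the one permitted crawler; the wildcard group blocks every other bot:

```
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
```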
To exclude a single robot while allowing all others:
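BadBot is a placeholder for the crawler you want to keep out:

```
User-agent: BadBot
Disallow: /

User-agent: *
Disallow:
```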
If you want to point crawlers at your sitemap file:
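The Sitemap directive takes an absolute URL (example.com is a placeholder):

```
Sitemap: https://www.example.com/sitemap.xml
```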
WordPress Robots.txt File
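A commonly recommended WordPress robots.txt blocks the admin area while keeping admin-ajax.php reachable for front-end features; this is a sketch, and the sitemap URL is a placeholder you would adjust for your install:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://www.example.com/sitemap.xml
```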
Some common crawler user-agent names:
• Googlebot-Image (for images)
• Googlebot-News (for news)
• Googlebot-Video (for video)
• MSNBot-Media (for images and video, Bing)
• Baiduspider (Baidu web search)
• Baiduspider-image (Baidu image search)
For a complete list of search engine bot user-agent names, see perishablepress.com.
Tip: Do not use the robots.txt file to hide files you especially want to keep private. The file is publicly readable, so disallowing a path tells everyone those files exist. We would recommend putting them inside a folder and protecting that folder on the server instead.
Other common mistakes are typos: misspelled directories, misspelled user-agents, missing colons after "User-agent" and "Disallow", and so on.
When your robots.txt file gets complicated, it is easy for an error to slip in.