Robots.txt File: Creation, Key Uses, and Best Practices Explained

Prevent search engines and bots from crawling or indexing parts of your site

How does a robots.txt file work?

How To Use The Robots.txt File

Updated: January 27, 2025
By: RSH Web Editorial Staff

The robots.txt file

A robots.txt file is a text file that tells search engine bots (also known as spiders or crawlers) which pages on your website to crawl and which to ignore.

Before a bot crawls a website, it requests this file and checks it for instructions on which pages it should visit and which it should skip.

Introduction

When a search engine bot or crawler accesses a website, it first requests a file named "robots.txt". If the file is found, the crawler checks it for indexing instructions. The file lives in the website's root directory and specifies which pages and files you do or do not want bots to crawl or index.

Website owners normally want their pages to be found by the search engines, but there are cases when that is not desirable: for example, when a section contains sensitive data, or when you want to save bandwidth by keeping crawlers away from image-heavy pages.

You can typically view the file by typing the full URL of the homepage and adding /robots.txt, for example:
https://rshweb.com/robots.txt
The file is not linked from anywhere on the site, so visitors rarely stumble upon it, but most web crawler bots will look for it before crawling the rest of the site.
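
To see how a well-behaved crawler uses this file, here is a minimal Python sketch using the standard library's urllib.robotparser (it needs network access and reuses the example URL above; it illustrates the idea and is not how any particular search engine is implemented):

from urllib import robotparser

# Download and parse the site's robots.txt file
rp = robotparser.RobotFileParser()
rp.set_url("https://rshweb.com/robots.txt")
rp.read()

# Ask whether a generic crawler ("*") or a specific bot may fetch a page
print(rp.can_fetch("*", "https://rshweb.com/"))
print(rp.can_fetch("Googlebot", "https://rshweb.com/"))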

Why Do You Need a Robots.txt File?

The Robots.txt file serves a few primary purposes. It helps you manage the behavior of search engine crawlers, including which pages and directories they can and cannot access. While this file does not guarantee that crawlers will always obey its instructions, most major search engines respect it.

Benefits of Using a Robots.txt File:

  • Prevent Indexing of Sensitive Content: You can block crawlers from accessing pages that are private, irrelevant, or shouldn’t be indexed by search engines, such as login pages, checkout pages, or admin sections.
  • Improve Crawl Efficiency: By directing crawlers away from less important pages, you ensure they focus on high-value content, which can optimize your site’s overall performance.
  • Avoid Duplicate Content Issues: You can use the Robots.txt file to prevent crawlers from indexing duplicate pages, ensuring that only the most relevant version of your content gets indexed.

Roles of the robots.txt file

The robots.txt file plays several crucial roles in website management and search engine optimization. Primarily, it serves as a communication tool between website owners and search engine crawlers, instructing them on which parts of the site to crawl and index. By specifying which pages or directories to allow or disallow, website administrators can control how their content appears in search engine results. Additionally, the robots.txt file can be used to protect sensitive or private information from being indexed, ensuring data security and compliance with privacy regulations. Overall, understanding and properly utilizing the robots.txt file is essential for optimizing website visibility, managing crawling behavior, and safeguarding content integrity.

Uses of a "robots.txt" file

One of the most common uses of a robots.txt file is to keep parts of a site out of search engine results, keeping in mind that it is a request to crawlers rather than a security mechanism.

A robots.txt file guides search engine crawlers on which parts of a website to index or avoid. It helps manage site visibility, optimize SEO, protect sensitive content, and control crawler traffic, ensuring better performance and efficient interaction with search engines.

NOTE: A website can have only one robots.txt file. For multiple or additional domains and subdomains, the robots.txt file must be placed in the respective document root of each domain or subdomain.

The Importance of the robots.txt in SEO Strategy

In the realm of SEO strategy, the robots.txt file holds significant importance as a guiding force for search engine crawlers. By strategically configuring this file, website owners can influence how search engines discover and index their content. Precise directives within the robots.txt can ensure that valuable pages are prioritized for crawling while irrelevant or sensitive areas are excluded. This level of control not only streamlines the indexing process but also enhances the visibility of desired content in search engine results pages (SERPs). Proper utilization of the robots.txt file contributes to improved website rankings, efficient crawling, and ultimately, a stronger SEO performance overall.

How to Create a "robots.txt" file

The robots.txt file is created in your website's root folder: /public_html/robots.txt

A "robots.txt" text file is basically just a simple text file made with any text editor such as Notepad++, and then can be uploaded to your website with a FTP program

You can also use the cPanel File Manager to create this file right on your website
Also see Setting up and using the FTP Interface in cPanel

The Basic Syntax for the robots.txt file

• User-agent: [The name of the robot for which you are writing the rules that follow]

• Disallow: [The page, folder, or path you do not want that robot to crawl]

• Allow: [The page, folder, or path you do want crawled, even inside a disallowed folder]

• Sitemap: Used to call out the location of any XML sitemap(s) associated with this URL. Note this directive is only supported by Google, Ask, Bing, and Yahoo.

• Crawl-delay: How many seconds a crawler should wait before loading and crawling page content. Note that Googlebot does not acknowledge this directive, but crawl rate can be set in Google Search Console.
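
For illustration, here is a minimal Python sketch showing how a parser reads these directives, using the standard library's urllib.robotparser (the rules and URLs below are made up for the example; site_maps() requires Python 3.8 or newer):

from urllib import robotparser

rules = [
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 10",
    "Sitemap: https://www.yourwebsite.com/sitemap.xml",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)  # parse the rules directly, nothing is fetched

print(rp.can_fetch("*", "/private/page.html"))  # False - blocked by Disallow
print(rp.can_fetch("*", "/index.html"))         # True  - no rule blocks it
print(rp.crawl_delay("*"))                      # 10
print(rp.site_maps())                           # ['https://www.yourwebsite.com/sitemap.xml']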

Example 1

To allow crawling of everything (all search engines), use this code:
User-agent: *
Disallow:

Example 2

To disallow crawling of everything (all search engines):
User-agent: *
Disallow: /

Example 3

To disallow a specific folder (all search engines):
User-agent: *
Disallow: /foldername/

Example 4

To disallow a specific file (all search engines):
User-agent: *
Disallow: /filename.html

Example 5

To disallow a folder but allow the crawling of one file inside it (all search engines):
User-agent: *
Disallow: /folderxyz/
Allow: /folderxyz/anyfile.html

Example 6

To allow only one specific robot access to the website:
User-agent: *
Disallow: /
User-agent: Googlebot
Disallow:
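
For illustration, a minimal Python sketch using the standard library's urllib.robotparser shows how these per-bot rules are evaluated (this is only an illustration; real search engines use their own parsers):

from urllib import robotparser

rules = [
    "User-agent: *",
    "Disallow: /",
    "",
    "User-agent: Googlebot",
    "Disallow:",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)  # parse the rules above without fetching anything

print(rp.can_fetch("Googlebot", "/index.html"))     # True  - Googlebot is allowed
print(rp.can_fetch("SomeOtherBot", "/index.html"))  # False - everyone else is blocked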

Example 7

To exclude a single robot:
User-agent: BadBotName
Disallow: /

Example 8

To point crawlers to your sitemap file:
User-agent: *
Sitemap: http://www.yourwebsite.com/sitemap.xml

Example 9

A typical WordPress robots.txt file:
User-agent: *
Disallow: /feed/
Disallow: /trackback/
Disallow: /wp-admin/
Disallow: /wp-content/
Disallow: /wp-includes/
Disallow: /readme.html
Disallow: /xmlrpc.php
Allow: /wp-content/uploads/
Sitemap: https://yourwebsite.com/sitemap.xml

WordPress Plugins For The Robots.txt

With WordPress plugins designed specifically for creating and editing Robots.txt files, you can control how search engines crawl your site—without any coding. Block sensitive pages, prevent duplicate content, and optimize crawl efficiency, all from your WordPress dashboard. These plugins are perfect for both beginners and experienced users who want to improve their site’s SEO effortlessly.

  • PC Robots.txt The PC Robots.txt plugin for WordPress simplifies managing your robots.txt file, enabling custom rules to control search engine crawlers. It supports adding, editing, or deleting directives directly from the WordPress dashboard, ensuring better SEO optimization and site indexing while remaining user-friendly and efficient for all experience levels.
  • Robots.txt Editor The Robots.txt Editor plugin for WordPress offers an easy way to customize your robots.txt file directly from the dashboard. Enhance SEO by managing crawler directives effortlessly, ensuring optimal site indexing and improved search engine visibility.
  • Robots.txt Quick Editor The Robots.txt Quick Editor plugin for WordPress allows fast and simple editing of your robots.txt file. Easily manage crawler directives to optimize SEO and control search engine indexing directly from your dashboard, ensuring improved site visibility and performance.
  • Booter The Booter Bots Crawlers Manager plugin for WordPress helps manage and block unwanted bots and crawlers. It improves site performance, enhances security, and optimizes SEO by allowing you to control access to your website effectively and efficiently.
  • Multisite Robots.txt Manager The Multisite Robots.txt Manager plugin for WordPress simplifies robots.txt management across multisite networks. Customize directives for individual sites or the entire network, improving SEO and crawler control while streamlining administration from a single, user-friendly dashboard.
  • Multipart robots.txt editor The Multipart Robots.txt Editor plugin for WordPress allows seamless customization of your robots.txt file. Easily manage directives, optimize SEO, and control crawler access directly from your dashboard, ensuring improved site visibility and performance with a user-friendly interface.
  • Advanced Robots.txt Optimizer & Editor The Advanced Robots.txt Optimizer Editor plugin for WordPress offers powerful tools to customize and optimize your robots.txt file. Enhance SEO, control crawler access, and improve site indexing effortlessly with its advanced features and intuitive interface.
  • Companion Sitemap & Robots.txt Generator The Companion Sitemap Generator plugin for WordPress creates XML sitemaps to enhance search engine indexing. Automatically generate and manage sitemaps, improving SEO and ensuring your content is easily discoverable by search engines, all through a simple, user-friendly interface.
  • MetaRobots by SEO-Sign The Meta Robots by SEO Sign plugin for WordPress enables precise control over meta robots tags. Easily manage indexing, follow directives, and improve SEO by customizing how search engines interact with your site's pages and content.
  • SEOPress The SEOPress plugin for WordPress can automatically generate meta titles, manage your robots.txt file, and more, directly from the WordPress dashboard: https://wordpress.org/plugins/wp-seopress/

Best Practices for Using Robots.txt

While the Robots.txt file is a powerful tool, improper use can cause issues with search engine crawling and indexing. Here are some best practices to help you make the most of it.

  • Be Specific with Directives: To ensure your instructions are clear and avoid unintended consequences, be as specific as possible when using Disallow and Allow directives. For example, rather than blocking an entire directory, block only specific pages if necessary.
  • Keep the Robots.txt File Simple and Organized: Your Robots.txt file should be easy to read and understand. Use comments (preceded by #) to explain each rule or section. This can be helpful for future updates or for other team members working on your site.
  • Test Your Robots.txt File: Before finalizing your Robots.txt file, always test it to ensure that the correct pages are being blocked or allowed. The robots.txt report in Google Search Console shows how Google reads your file, and you can also check it locally (see the Python sketch after this list).
  • Don’t Rely Solely on Robots.txt for Security: The Robots.txt file is not a security tool. While it can prevent search engine crawlers from indexing certain content, it does not prevent users or malicious bots from accessing those pages. For sensitive content, use proper security measures like password protection.
  • Keep It Updated: As your site evolves and grows, it’s important to keep your Robots.txt file updated. Regularly review and modify it to reflect changes to your site structure, content, and any new pages or features you may want to block from crawlers.
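
As a quick local check (this is only a sketch, not a replacement for Google's own tools; the file name and URL list are made-up examples), you can parse a draft robots.txt with Python's urllib.robotparser and confirm your most important pages are still crawlable:

from urllib import robotparser

# Load a draft robots.txt from disk before uploading it
with open("robots.txt") as f:
    rp = robotparser.RobotFileParser()
    rp.parse(f.read().splitlines())

# Hypothetical list of pages that must stay crawlable
important_pages = ["/", "/products/", "/blog/latest-post.html"]

for page in important_pages:
    if not rp.can_fetch("Googlebot", page):
        print(f"WARNING: {page} is blocked for Googlebot")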

Common Mistakes to Avoid

While the Robots.txt file is a helpful tool, it’s easy to make mistakes that can have unintended consequences. Here are some common mistakes to avoid:

  • Blocking important pages: Accidentally blocking essential pages like your homepage or key product pages can result in your content being excluded from search engine indexes.
  • Over-blocking: If you block too many pages, you might prevent search engines from crawling important sections of your site.
  • Not using the correct syntax: Incorrect formatting or syntax errors in your Robots.txt file can result in search engines ignoring your directives.

Common Search Engine Bot User-Agent Names

Google:
Googlebot
Googlebot-Image (for images)
Googlebot-News (for news)
Googlebot-Video (for video)

Bing
Bingbot
MSNBot-Media (for images and video)

DuckDuckGo
DuckDuckBot
DuckDuckBot/1.0; (+http://duckduckgo.com/duckduckbot.html)

Facebook
Facebook
facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
facebookexternalhit/1.1
facebookcatalog/1.0

Yahoo
Slurp
Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)

AOL
AOL.com
Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts)
Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.27; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727)

Baidu
Baiduspider
Baidu Web Search
Baidu Image Search

For a complete list of search engine bot user-agent names, see perishablepress.com

Tip: Do not disallow files in the robots.txt file that you want bots to crawl, and especially do not list files you want to keep hidden. By disallowing them, you are telling everyone exactly where those files are. We would recommend putting such files inside a folder and disallowing that folder instead.
Other common mistakes are typos, misspelled directories and user-agents, missing colons after "User-agent" and "Disallow", and so on. When your robots.txt file gets complicated, it is easy for an error to slip in.
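
To catch some of these mistakes before a crawler does, a rough Python sketch like the one below can "lint" a robots.txt file for missing colons, unknown directives, and empty values (the directive list and file name here are assumptions for the example, not an official specification):

# Minimal robots.txt lint sketch
KNOWN_DIRECTIVES = {"user-agent", "disallow", "allow", "sitemap", "crawl-delay"}

with open("robots.txt") as f:
    for number, raw in enumerate(f, start=1):
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if ":" not in line:
            print(f"Line {number}: missing colon -> {line}")
            continue
        directive, value = line.split(":", 1)
        directive = directive.strip().lower()
        if directive not in KNOWN_DIRECTIVES:
            print(f"Line {number}: unknown directive -> {directive}")
        elif directive == "user-agent" and not value.strip():
            print(f"Line {number}: User-agent has no value")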

Summary

The robots.txt file serves as a communication tool between website administrators and search engine crawlers, providing instructions on which pages or directories should be crawled and indexed. It plays a crucial role in managing a website's visibility in search engine results and protecting sensitive or private content from being indexed. By carefully configuring the robots.txt file, website owners can optimize their SEO Strategy, ensuring that their most important pages are prioritized for crawling while excluding irrelevant or duplicate content. Overall, understanding and properly utilizing the robots.txt file is essential for effective website management and search engine optimization.

Author Bio:

An esteemed contributor in the realms of technology and business. With a distinguished career marked by leadership roles within Fortune 500 companies...

We welcome your comments, questions, corrections and additional information relating to this article. Please be aware that off-topic comments will be deleted.
If you need specific help with your account, feel free to contact us anytime
Thank you
