When using a robots.txt file, does the user-agent string have to be exactly as it appears in my server logs? For example, when trying to match GoogleBot, can I just use googlebot? Also, will a partial match work? For example, just using Google?
Yes, the user agent has to be an exact match.
From robotstxt.org: "globbing and regular expression are not supported in either the User-agent or Disallow lines"
The User-agent directive in robots.txt takes the crawler's token, not its full header. E.g. you never see User-agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) (which actually fails to match Googlebot according to Google's robots.txt tester). As far as Googlebot is concerned, a partial match (the token) is required: User-agent: Googlebot. – Defeatism
At least for Googlebot, the user-agent match is case-insensitive. See the 'Order of precedence for user-agents' section: https://code.google.com/intl/de/web/controlcrawlindex/docs/robots_txt.html (as already answered in another question).
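To make the answer concrete: a robots.txt group addresses a crawler by its token on its own User-agent line, never by the full header string. A minimal illustration (the paths here are made-up examples):

```
# Google's crawler, addressed by its token, not the full UA header
User-agent: Googlebot
Disallow: /private/

# Fallback group for all other crawlers
User-agent: *
Disallow: /tmp/
```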
In the original robots.txt specification (from 1994), it says:
User-agent
[…]
The robot should be liberal in interpreting this field. A case insensitive substring match of the name without version information is recommended.
[…]
But whether (and which) parsers actually work like that is another question. Your best bet would be to look for the documentation of the bots you want to add. You’ll typically find the agent identifier string in it, e.g.:
Bing:
We want webmasters to know that bingbot will still honor robots.txt directives written for msnbot, so no change is required to your robots.txt file(s).
DuckDuckBot is the Web crawler for DuckDuckGo. It respects WWW::RobotRules […]
The Google user-agent is (appropriately enough) Googlebot.
User Agent archive.org_bot is used for our wide crawl of the web. It is designed to respect robots.txt and META robots tags.
…
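Whether a given parser follows the liberal substring rule from the 1994 spec can be checked directly. Python's standard-library robotparser, for instance, implements it: the crawler name is lowercased, any /version suffix is dropped, and a group applies if its token is a substring of the result. A quick sketch (the rules and URLs here are made up for illustration):

```python
from urllib import robotparser

RULES = """\
User-agent: Googlebot
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# The exact token, a different case, and a version-suffixed name all
# fall under the same group in this parser:
for agent in ("Googlebot", "googlebot", "Googlebot/2.1"):
    print(agent, rp.can_fetch(agent, "https://example.com/private/page"))

# Under the liberal 1994 reading, even a rule for "Google" applies to
# "Googlebot" here, unlike Google's own parser, which does not accept
# that shorter partial match:
rp2 = robotparser.RobotFileParser()
rp2.parse("User-agent: Google\nDisallow: /private/".splitlines())
print(rp2.can_fetch("Googlebot", "https://example.com/private/page"))
```

This is only what one particular parser does; as the answer above notes, each bot's documentation is the authoritative source for how it matches.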
robots.txt is case-sensitive in general. Google is more forgiving than other bots and may accept its token in either case; other bots may not.
Also, will a partial-match work? For example just using Google?
In theory, yes. However, in practice only specific partial matches, or "substrings" (as mentioned in @unor's answer), actually match. These specific substrings appear to be referred to as "tokens", and the match must often be exact for these tokens.
With regards to the standard Googlebot, only Googlebot (case-insensitive) appears to match. Any shorter partial match, such as Google, fails to match. Any longer partial match, such as Googlebot/1.2, fails to match. And using the full user-agent string (Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)) also fails to match. (Although there is technically more than one user-agent for the Googlebot anyway, so matching on the full user-agent string would not be recommended even if it did work.)
These tests were performed with Google's robots.txt tester.
Reference: robots.txt
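Put differently, the tester's behaviour described above amounts to a case-insensitive comparison of whole tokens. A minimal sketch of that rule (the function is hypothetical, written only to mirror the observed matching; it is not Google's actual code):

```python
def token_matches(rule_agent: str, crawler_token: str) -> bool:
    """Case-insensitively compare whole tokens: no substrings and no
    version suffixes, mirroring the tester behaviour described above."""
    return rule_agent.strip().lower() == crawler_token.strip().lower()

print(token_matches("googlebot", "Googlebot"))      # the exact token matches
print(token_matches("Google", "Googlebot"))         # shorter partial: no match
print(token_matches("Googlebot/1.2", "Googlebot"))  # longer partial: no match
```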