Meta tag vs robots.txt
  1. Is it better to use meta tags* or the robots.txt file for informing spiders/crawlers to include or exclude a page?

  2. Are there any issues in using both the meta tags and the robots.txt?

*E.g.: <meta name="robots" content="index, follow">

Lanciform answered 27/7, 2010 at 21:39 Comment(2)
This is a programming related question in terms of web development.Blockage
It is preferred if you can post separate questions instead of combining your questions into one. That way, it helps the people answering your question and also others hunting for at least one of your questions. Thanks!Zeuxis

Robots.txt IMHO.

The Meta tag option tells bots not to index individual files, whereas Robots.txt can be used to restrict access to entire directories.

Sure, use a Meta tag if you have the odd page in indexed folders that you want skipped, but generally, I'd recommend you put most of your non-indexed content in one or more folders and use robots.txt to skip the lot.

No, there isn't a problem in using both - if there is a clash, in general terms, a deny will overrule an allow.
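As a sketch of the directory-level blocking this answer describes (the folder names here are hypothetical):

```
User-agent: *
# keep crawlers out of entire folders of non-indexed content
Disallow: /private/
Disallow: /drafts/
```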

Monoculture answered 27/7, 2010 at 21:49 Comment(3)
Although I tend to go for Robots.txt myself as well, is it not possible that dodgy robots could just use that file to get a convenient list of new directories it can spider? Whereas with the META tag, they'd have no way of finding a non-linked page in the first place... Just a thought!Crusty
@Crusty That may be true, but that is why you should not display sensitive information to unauthorized users. robots.txt is used to instruct crawlers what information is not worthwhile rather than what is private and must not be accessed.Syncopation
I recommend all visitors to this page scroll down and check the next answer via @Benjamin, as it links to Google's documentation! https://mcmap.net/q/468456/-meta-tag-vs-robots-txtAlban

There is one significant difference. According to Google, they will still index a page blocked by a robots.txt Disallow if the page is linked to from another site.

However, they will not if they see a noindex meta tag:

While Google won't crawl or index the content blocked by robots.txt, we might still find and index a disallowed URL from other places on the web. As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results. You can stop your URL from appearing in Google Search results completely by using other URL blocking methods, such as password-protecting the files on your server or using the noindex meta tag or response header.
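The "response header" alternative mentioned in the quote is the X-Robots-Tag HTTP header, which works even for non-HTML resources such as PDFs. A hypothetical response might look like:

```
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex
```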

Mufi answered 19/8, 2013 at 14:27 Comment(5)
And according to these 1, 2, 3 pages, it's not just google. In general, the meta tag is used to disallow indexing, whereas robots.txt is used to disallow crawling.Charissacharisse
+1, and I took the liberty to update your post with a quote from the linked page, should its contents change!Wollongong
@Charissacharisse If I follow you correctly, it is the case that – if we want no crawling or indexing of a page – then we should both block the page on robots.txt and use the noindex, nofollow tag in the page's meta tags. Sorry for the seeming redundant question here, some of the citation links you have provided are now dead.Pelkey
@Pelkey No worries at all; in fact it is an excellent question and the answer is non-obvious. On this page Google tells us that if we follow your suggestion of blocking with both robots.txt and meta tags, and the page was already indexed, the changed meta tag would be ignored and the page would remain indexed, because Google is not allowed to crawl the page to see the new tag! So the answer is: always provide the meta tag. You can add the robots.txt entry (to reduce requests) once you know the page has been dropped, or if it was never indexed.Charissacharisse
@Charissacharisse I've been looking at this subject off and on for some months, and this is by far the clearest and most concise answer. You've set forth the simple logic which I had mistaken for a Google paradox! Sincere thanks for this clarification.Pelkey

Both are supported by all crawlers which respect webmasters' wishes. Not all do, but against those neither technique is sufficient.

You can use robots.txt rules for general things, like disallowing whole sections of your site. If you say Disallow: /family, then all URLs starting with /family are not crawled.

The meta tag can be used to disallow a single page. Pages disallowed by meta tags do not affect sub-pages in the page hierarchy. If you have a meta noindex tag on /work, it does not prevent a crawler from accessing /work/my-publications if an allowed page links to it.
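The Disallow: /family behaviour described above can be checked with Python's standard-library robots.txt parser. This is just a sketch, and example.com is a placeholder domain:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Feed the rules directly instead of fetching them, so the demo is
# self-contained; normally you would use set_url() and read().
rp.parse([
    "User-agent: *",
    "Disallow: /family",
])

# Every path starting with /family is blocked; everything else is allowed.
print(rp.can_fetch("*", "https://example.com/family"))         # False
print(rp.can_fetch("*", "https://example.com/family/photos"))  # False
print(rp.can_fetch("*", "https://example.com/work"))           # True
```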

Audible answered 27/7, 2010 at 21:50 Comment(0)

meta is superior.

In order to exclude individual pages from search engine indices, the noindex meta tag is actually superior to robots.txt.

Apollus answered 15/2, 2014 at 16:57 Comment(0)

There is a huge difference between the meta robots tag and robots.txt.

In robots.txt, we tell crawlers which pages to crawl and which to exclude, but we do not ask them to avoid indexing the excluded pages.

But if we use the meta robots tag, we can ask search engine crawlers not to index a page. The tag to be used for this is:

<meta name="robots" content="noindex">

OR

<meta name="robots" content="follow, noindex">

In the second meta tag, I have asked the robot to follow the links on the page but not to index it in the search engine.
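As a sketch of how a crawler might read these directives, here is a minimal parser built on Python's standard library (the class name and sample HTML are made up for illustration):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives from <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            self.directives += [d.strip().lower() for d in content.split(",")]

parser = RobotsMetaParser()
parser.feed('<html><head><meta name="robots" content="follow, noindex"></head></html>')
print(parser.directives)              # ['follow', 'noindex']
print("noindex" in parser.directives) # True
```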

Terrance answered 18/7, 2014 at 12:23 Comment(0)

Here is my knowledge about them; I am talking about their scope. Both can be used for blocking content.

The difference between the two is:

  • The meta robots tag can block a single page with a snippet of code pasted into the header of the page. By using the meta robots tag we tell the search engine what to do with that page.
  • In a robots.txt file you can block the whole website (or whole sections of it).

Here is the example of meta robot:

<meta name="robots" content="index, follow">
<meta name="robots" content="all">
<meta name="robots" content="index, nofollow">
<meta name="robots" content="noindex, follow">
<meta name="robots" content="noindex, nofollow">

Here is the example of Robots.txt file:

Allowing crawlers to crawl the whole website:

User-agent: *
Disallow:

Disallowing crawlers from crawling the whole website:

User-agent: *
Disallow: /
Patricapatrice answered 4/3, 2019 at 13:47 Comment(0)

I would probably use robots.txt over the meta tag. Robots.txt has been around longer and might be more widely supported (but I am not 100% sure of that).

As for the second part, I think most spiders will apply whichever is the most restrictive setting for a page if there is a disparity between robots.txt and the meta tag.

Town answered 27/7, 2010 at 21:42 Comment(0)

Robots.txt is good for pages which consume a lot of your crawl budget, like internal search results or filters with infinite combinations. If you allow Google to crawl yoursite.com/search=lalalala it will waste your crawl budget.
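A robots.txt sketch for that internal-search case (note the * wildcard is a Google/Bing extension rather than part of the original robots.txt standard, and the URL pattern is taken from the answer above):

```
User-agent: *
# block crawling of internal search result URLs
Disallow: /*search=
```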

Antipole answered 23/1, 2014 at 17:3 Comment(2)
You still can disallow that using meta-tags, right? But the question was what is the difference between this approach and robots.txt.Raptorial
I don't think it is the same. If your rules are in robots.txt a crawler would just have to periodically load robots.txt in order to have an up-to-date view of what it's allowed to crawl. If your rules are in meta tags it would have to load every tagged page periodically to have an up-to-date view of the rules.Dendro

You want to use 'noindex,follow' in a robots meta tag, rather than robots.txt, because it allows the link juice to pass through. It is better from an SEO perspective.

Chloric answered 12/8, 2014 at 18:31 Comment(0)

Is it better to use meta tags* or the robots.txt file for informing spiders/crawlers to include or exclude a page?

Answer: Both are important, and they serve different purposes. The robots.txt file is used to include or exclude pages or root files from a spider's crawl, while meta tags are read per page and describe the page's niche and the content within it.

Are there any issues in using both the meta tags and the robots.txt?

Answer: There is no issue in using both; together they let search engine spiders/crawlers index or de-index the site's URLs as intended.

Read more about how search engine spiders work here: https://www.playbuzz.com/alexhuber10/how-search-and-spider-engines-work

Akin answered 23/7, 2019 at 11:7 Comment(0)

You can use either one, but if your website has plenty of web pages then robots.txt is easier and saves time.

Carabin answered 20/8, 2013 at 7:20 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.