Stop Google from indexing [closed]

Is there a way to stop Google from indexing a site?

Murky asked 23/12, 2008 at 23:29 Comment(7)
Google obeys the robots.txt file.Otolith
What is the robots.txt file?Murky
Added link to the wikipedia article on robots.txtOtolith
Google can still list you in search results regardless of robots.txtTrinidadtrinitarian
@Trinidadtrinitarian - the question was how to stop Google from indexing a site. Google will obey the robots.txt file and not index the portions of your site that you disallow.Otolith
@Otolith : while the most common process goes from Indexing to Listing, a site doesn’t have to be indexed to be listed. If a link points at a page, domain or wherever, that link will be followed. If the robots.txt on that domain prevents the search engine from indexing that page, it’ll still show the URL in the results if it can gather from other variables that it might be worth looking at.Swank
I voted to close this question because it is not a programming question and it is off-topic on Stack Overflow. Non-programming questions about your website should be asked on Webmasters. In this case the question has already been asked and answered there: Block Google (and other) from indexing a domainVenitavenite

robots.txt

User-agent: *
Disallow: /

This will block all compliant search bots from crawling the site.

For more info, see: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=40360

Subtorrid answered 23/12, 2008 at 23:32 Comment(6)
Actually, to be precise, this will block all legitimate bots from crawling the site. Malicious ones will still attempt to do so, just in case that matters.Bield
That is correct; however, if a "spider" does not check robots.txt then it is likely malicious, and in my experience such bots also spoof the user agent, which makes them ridiculously hard to stop.Subtorrid
One can also use the robots meta tag. I wrote how here: ligatures.net/content/expertise/… The benefit of this method is that it gives finer control (i.e., per page) when necessary.Evieevil
This answer will result in Google still indexing the page. When I tried it and searched Google, my site still showed up, but with "A description for this result is not available because of this site's robots.txt". Please see Carlos's answer.Unloosen
@JustinJStark My understanding is that this is only true if the page was previously indexed. If a site uses this from day 1, the pages will never make it to Google's (or other legitimate search providers) index.Kwangtung
Beware! Actually, the robots.txt file will prevent search engines from crawling your site, but not from indexing it... Indexing is the process of downloading a site's or a page's content to the search engine's servers, thereby adding it to its “index”. @Upright's answer is much more accurate and complete.Swank

Remember that preventing Google from crawling doesn't mean you can keep your content private.

My answer is based on a few sources:
https://developers.google.com/webmasters/control-crawl-index/docs/getting_started
https://sites.google.com/site/webmasterhelpforum/en/faq--crawling--indexing---ranking

The robots.txt file controls crawling, not indexing! Those are two completely different actions, performed separately. Some pages may be crawled but not indexed, and some may even be indexed but never crawled: a link to a non-crawled page may exist on another website, which will lead the Google indexer to follow it and try to index the page.

The question is about indexing, which is gathering data about a page so that it may be made available through search results. Indexing can be blocked by adding a meta tag:

<meta name="robots" content="noindex" />

or by adding an HTTP header to the response:

X-Robots-Tag: noindex
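
For illustration only, here is one way to send that header from application code. This is a minimal sketch assuming a Node.js/Express app (the original answer doesn't name a server stack), not a definitive implementation:

import express, { Request, Response, NextFunction } from "express";

const app = express();

// Ask search engines not to index any response from this app.
app.use((_req: Request, res: Response, next: NextFunction) => {
  res.setHeader("X-Robots-Tag", "noindex");
  next();
});

app.get("/", (_req: Request, res: Response) => {
  res.send("This page asks search engines not to index it.");
});

app.listen(3000);

Equivalent directives exist for Apache and Nginx; the Apache version appears in later answers on this page.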

If the question is about crawling, then of course you can create a robots.txt file and put the following lines in it:

User-agent: *
Disallow: /

Crawling is an action performed to gather information about the structure of one specific website. For example, you've added the site through Google Webmaster Tools. The crawler will take that into account, visit your website, and look for robots.txt. If it doesn't find one, it will assume that it can crawl anything (it's also very important to have a sitemap.xml file to help in this operation and to specify priorities and change frequencies; a minimal example is shown below). If it finds the file, it will follow its rules. After successful crawling it will, at some point, run indexing for the crawled pages, but you can't tell when...
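
As a sketch of the sitemap.xml format mentioned above (the URL, date, and values are placeholders; the layout follows the standard sitemaps.org protocol):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2014-02-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>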

Important: this all means that your page can still be shown in Google search results regardless of robots.txt.

Upright answered 11/2, 2014 at 0:33 Comment(1)
What does Google do if you allow indexing with X-Robots-Tag but have a noindex metatag? (UPDATE: found my answer here: https://mcmap.net/q/266391/-precedence-of-x-robots-tag-header-vs-robots-meta-tag/1429450 )Outstand

There are several ways to stop crawlers, including Google, from crawling and indexing your website.

At server level through header

Header set X-Robots-Tag "noindex, nofollow"

At root domain level through robots.txt file

User-agent: *
Disallow: /

At page level through robots meta tag

<meta name="robots" content="noindex, nofollow" />

However, I must say that if your website has outdated or no-longer-existing pages/URLs, then you should wait for some time; Google will automatically deindex those URLs in a subsequent crawl - read https://support.google.com/webmasters/answer/1663419?hl=en

Car answered 4/9, 2018 at 13:6 Comment(0)

You can disable indexing server-wide by adding the setting below globally in the Apache conf, or the same directive can be used in a vhost to disable it for that particular vhost only.

Header set X-Robots-Tag "noindex, nofollow"

Once this is done, you can test it by verifying the headers Apache returns:

curl -I staging.mywebsite.com

HTTP/1.1 302 Found
Date: Sat, 26 Nov 2016 22:36:33 GMT
Server: Apache/2.4.18 (Ubuntu)
Location: /pages/
X-Robots-Tag: noindex, nofollow
Content-Type: text/html; charset=UTF-8

Giana answered 26/11, 2016 at 22:42 Comment(0)

Bear in mind that Microsoft's crawler for Bing, despite their claim that it obeys robots.txt, does not always do so.

Our server stats indicate that they have a number of IPs running crawlers that do not obey robots.txt, as well as a number that do.

Vespucci answered 21/9, 2011 at 16:33 Comment(0)

I use a simple aspx page that relays results from Google to my browser using a fake 'Pref' cookie so that it gets 100 results at a time. I didn't want Google to see this relay page, so I check the IP address, and if it starts with 66.249 I simply do a redirect.
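
The answer describes an ASP.NET page; purely as an illustration (this is not the author's code), the same IP-prefix check might look like this in a Node.js/Express handler, where the /relay route and response text are made up:

import express, { Request, Response } from "express";

const app = express();

app.get("/relay", (req: Request, res: Response) => {
  // req.ip can be undefined behind some proxy setups, so default to "".
  const ip = req.ip ?? "";
  if (ip.startsWith("66.249")) {
    // Looks like the Google crawler range mentioned above: redirect it away.
    res.redirect("/");
    return;
  }
  res.send("relayed results would go here");
});

app.listen(3000);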

Click my name if you value privacy and would like a copy.

Another trick I use is to have some JavaScript that calls a page to set a flag in the session. Most (NOT ALL) web bots don't execute JavaScript, so if the flag isn't set you know the client is either a browser with JavaScript turned off or, more than likely, a bot.
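
As a sketch of that idea (the /mark-human endpoint is made up; its server-side handler is what would actually set the session flag):

// Runs in the browser: simple bots that don't execute JavaScript never make
// this request, so only script-capable clients end up with the session flag.
fetch("/mark-human", { method: "POST" }).catch(() => {
  // Best-effort heuristic only; ignore network errors.
});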

Hjerpe answered 22/9, 2011 at 15:39 Comment(0)

You can also add the robots meta tag this way:

<head>
<title>...</title>
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
</head>

And another extra layer is to modify .htaccess, but you need to look into it carefully; a sketch is shown below.
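
For example, assuming Apache with mod_headers enabled, a .htaccess sketch along these lines sends the noindex signal for everything it covers (adjust the FilesMatch pattern as needed):

# Ask search engines not to index or follow anything served from here.
Header set X-Robots-Tag "noindex, nofollow"

# Or limit it to particular file types, e.g. PDFs:
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>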

Throat answered 2/11, 2012 at 21:36 Comment(0)

Use a nofollow meta tag:

<meta name="robots" content="nofollow" />

To specify nofollow at the link level, add the attribute rel with the value nofollow to the link:

<a href="example.html" rel="nofollow">link text</a>
Askew answered 27/3, 2013 at 11:2 Comment(0)

Is there a way to stop Google from indexing a site?

To stop Google from indexing your pages, simply add the following meta tag to the head of every page:

<meta name="googlebot" content="noindex, nofollow">
Wiesbaden answered 20/11, 2017 at 8:51 Comment(0)
