Scraping/Parsing Google search results in Ruby
6

Assume I have the entire HTML of a Google search results page. Does anyone know of any existing code (Ruby?) to scrape/parse the first page of Google search results? Ideally it would handle the Shopping Results and Video Results sections that can spring up anywhere.

If not, what's the best Ruby-based tool for screenscraping in general?

To clarify: I'm aware that it's difficult/impossible to get Google search results programmatically via an API, and that simply cURLing results pages has a lot of issues. There's consensus on both of these points here on Stack Overflow. My question is different.

Godchild answered 8/10, 2009 at 19:0 Comment(3)
I suggest taking a look at the Google rank checker ( google-rank-checker.squabbel.com ). It's not Ruby, it's written in PHP, but it's open source and solves all the tasks you need. You don't seem to be fixed on Ruby; I've personally used PHP (console scripts) for many such projects (also in production environments). Anyway, even if you write in Ruby you'll find the PHP code useful, as some tasks when scraping Google can be quite tricky (delays, IPs, DOM parsing, sending correct GET parameters, etc.).Yeah
This is an OLD question so anyone using it to justify using scraping instead of Google's API needs to rethink their logic. Use the API, that's what it's there for.Forgather
Use "Google Custom Search" instead.Forgather
9

This should be a very simple thing; have a look at the "Screen Scraping with ScrAPI" screencast by Ryan Bates. You can also do it without a dedicated scraping library by sticking to something like Nokogiri.


From Nokogiri's documentation:

require 'nokogiri'
require 'open-uri'

# Get a Nokogiri::HTML::Document for the page we're interested in...

doc = Nokogiri::HTML(URI.open('http://www.google.com/search?q=tenderlove'))

# Do funky things with it using Nokogiri::XML::Node methods...

####
# Search for nodes by css
doc.css('h3.r a.l').each do |link|
  puts link.content
end

####
# Search for nodes by xpath
doc.xpath('//h3/a[@class="l"]').each do |link|
  puts link.content
end

####
# Or mix and match.
doc.search('h3.r a.l', '//h3/a[@class="l"]').each do |link|
  puts link.content
end
Dunne answered 8/10, 2009 at 19:6 Comment(3)
And you can do link['href'] to get the href of the link ;).Weft
Ryan has two screencasts on scraping: the one on ScrAPI mentioned above, and one on Nokogiri which uses code more similar to the code in this answer.Korns
It seems that Google changed the layout of the page and this code is not working anymore.Twoup
3

I'm unclear as to why you want to be screen scraping in the first place. Perhaps the REST search API would be more appropriate? It will return the results in JSON format, which will be much easier to parse, and save on bandwidth.

For example, if your search was 'foo bar', you could just send a GET request to http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=foo+bar and handle the response.

For more information, see "Google Search REST API" or Google's developer page.
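For reference, a minimal stdlib-only sketch of that request. The endpoint shown is the old AJAX Search API, which Google has since retired (see the comments below), and the sample response is an illustrative trimmed-down shape, not live data:

```ruby
require 'cgi'
require 'json'
require 'net/http'
require 'uri'

# Build the request URL for the (now-retired) AJAX Search API.
def google_ajax_url(query)
  "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=#{CGI.escape(query)}"
end

# Perform the GET and parse the JSON body. This performs a network
# call and will fail today, since the endpoint has been shut down.
def google_ajax_search(query)
  JSON.parse(Net::HTTP.get(URI(google_ajax_url(query))))
end

# The response used to look roughly like this (trimmed to one result):
sample = '{"responseData":{"results":[' \
         '{"titleNoFormatting":"Example","url":"http://example.com"}]}}'
JSON.parse(sample)['responseData']['results'].each do |r|
  puts "#{r['titleNoFormatting']} - #{r['url']}"
end
```

Parsing a small JSON hash like this is far less fragile than picking CSS selectors out of Google's HTML.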

Atencio answered 8/10, 2009 at 19:36 Comment(3)
It does not return the same results sadly. See: code.google.com/p/google-ajax-apis/issues/detail?id=43Reliable
"The Google Web Search API is no longer available"Twoup
Use "Google Custom Search" instead.Forgather
0

I would suggest HTTParty + Google's Ajax search API.

Acquah answered 8/5, 2010 at 10:17 Comment(1)
As written this is hardly an answer. Point to the appropriate pages, show why it's a usable answer with some code examples.Forgather
-1

You should be able to accomplish your goal easily with Mechanize.

If you already have the results, all you need is Hpricot or Nokogiri.

Reinsure answered 8/10, 2009 at 19:6 Comment(3)
You're welcome! And see my update: if you already have the results, Mechanize may be overkill.Reinsure
Hpricot is no longer supported so don't go there. Nokogiri is alive and well, and does support Hpricot's syntax, but don't use it; use the normal Nokogiri syntax as demonstrated in the cheat sheet and tutorials.Forgather
Unfortunately, because Google uses DHTML for more and more of the page, scraping is more difficult than it used to be. Instead use "Google Custom Search".Forgather
-1

I don't know of Ruby-specific code, but this Google scraper could help you. It's an online demo tool that scrapes and parses Google results. The most interesting thing is the article there explaining the parsing process in PHP, but it's applicable to Ruby and any other programming language.

Odellodella answered 16/9, 2011 at 17:54 Comment(1)
At this moment it just displays an endless list of CAPTCHAs to solve.Twoup
-1

Scraping has become harder and harder as Google keeps changing and expanding how the results are structured (rich snippets, knowledge graph, direct answers, etc.). We built a service that handles part of this complexity, and we have a Ruby library. It's pretty straightforward to use:

require 'google_search_results'  # gem install google_search_results

query = GoogleSearchResults.new q: "coffee"

# Parse Google results into a Ruby hash
hash_results = query.get_hash
Sever answered 22/12, 2017 at 19:38 Comment(2)
This seems to require that you pay Google for a SERP API key.Offutt
Use "Google Custom Search" instead.Forgather

© 2022 - 2024 — McMap. All rights reserved.