I am trying to automatically download some PDFs from a site (http://bibliotecadigitalhispanica.bne.es) using Python.
I've tried the urllib/urllib2/mechanize modules (which I have used successfully on other sites, with the standard functions like urlopen, urlretrieve, etc.), but here the links have JavaScript embedded in their href attributes that does some processing and opens the PDF. From what I have read here, these modules can't handle that. For example, when I do the following:
import mechanize

request = mechanize.Request('the example url below')
response = mechanize.urlopen(request)
I just get back the containing HTML page - I can't seem to extract the PDF (there are no direct links to it inside that page, either).
I know by looking through the headers in a real browser (using the LiveHTTPHeaders extension in Firefox) that a lot of HTTP requests are made and eventually the PDF is returned (and displayed in the browser). I would like to be able to intercept this and download it. Concretely, I get a series of 302 and 304 responses, eventually leading to the PDF.
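In case it's useful, this is roughly how I've been trying to watch that redirect chain from Python (a minimal sketch; the URL is a placeholder, and I'm assuming the server needs cookies preserved across the 302s):

import urllib2
import cookielib

# keep cookies across the redirect chain and print the raw HTTP traffic
cj = cookielib.CookieJar()
opener = urllib2.build_opener(
    urllib2.HTTPCookieProcessor(cj),
    urllib2.HTTPHandler(debuglevel=1),  # echoes each request/response
)
response = opener.open('the example url below')
# if the final hop were the PDF, the Content-Type header should say application/pdf
print response.info().getheader('Content-Type')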
Here is an example of a link attribute that I am crawling: href='javascript:open_window_delivery("http://bibliotecadigitalhispanica.bne.es:80/verylonglinktoaccess");'
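For completeness, pulling the inner URL out of the javascript: wrapper is straightforward (a minimal sketch, with the href value hard-coded here; in my crawler it comes from the parsed page):

import re

href = 'javascript:open_window_delivery("http://bibliotecadigitalhispanica.bne.es:80/verylonglinktoaccess");'
match = re.search(r'open_window_delivery\("([^"]+)"\)', href)
if match:
    target_url = match.group(1)  # this is the URL I pass to mechanize above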
It seems that if I execute the JavaScript embedded in the href attribute, I can eventually reach the PDF document itself. I've tried selenium, but it is a tad confusing - I'm not quite sure how to use it even after reading its documentation. Can someone suggest a way (either through a module I haven't tried or through one that I have) to do this?
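For reference, this is about as far as I got with selenium (a rough sketch; the CSS selector is just my guess at how to find the javascript: links):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get('the collection page url in the P.S. below')
# guessing at a selector for the javascript: links - this may need adjusting
link = driver.find_element_by_css_selector("a[href*='open_window_delivery']")
link.click()
# the PDF presumably opens in the browser, but I don't know how to save it from here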
Thank you very much for any help with this.
P.S.: in case you would like to see what I am trying to replicate, here is the page with the PDF links mentioned above (the ones with the PDF icons): http://bibliotecadigitalhispanica.bne.es/R/9424CFL1MDQGLGBB98QSV1HFAD2APYDME4GQKCBSLXFX154L4G-01075?func=collections-result&collection_id=1356