This simple Python 3 script:
import urllib.request
host = "scholar.google.com"
link = "/scholar.bib?q=info:K7uZdMSvdQ0J:scholar.google.com/&output=citation&hl=en&as_sdt=1,14&ct=citation&cd=0"
url = "http://" + host + link
filename = "cite0.bib"
print(url)
urllib.request.urlretrieve(url, filename)
raises this exception:
Traceback (most recent call last):
  File "C:\Users\ricardo\Desktop\Google-Scholar\BibTex\test2.py", line 8, in <module>
    urllib.request.urlretrieve(url, filename)
  File "C:\Python32\lib\urllib\request.py", line 150, in urlretrieve
    return _urlopener.retrieve(url, filename, reporthook, data)
  File "C:\Python32\lib\urllib\request.py", line 1597, in retrieve
    block = fp.read(bs)
ValueError: read of closed file
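For diagnosis, it can help to fetch the URL with urllib.request.urlopen instead of urlretrieve, since urlopen raises urllib.error.HTTPError with the actual HTTP status code when the server rejects the request, rather than the opaque ValueError above. A minimal sketch:

import urllib.request
import urllib.error

url = ("http://scholar.google.com/scholar.bib?q=info:K7uZdMSvdQ0J:"
       "scholar.google.com/&output=citation&hl=en&as_sdt=1,14"
       "&ct=citation&cd=0")

try:
    response = urllib.request.urlopen(url)
    data = response.read()
    response.close()
    print("Fetched %d bytes" % len(data))
except urllib.error.HTTPError as e:
    # e.code is the HTTP status, e.g. 403 if the server forbids the request.
    print("Server refused the request:", e.code, e)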
I thought this might be a temporary problem, so I added some simple exception handling like so:
import random
import time
import urllib.request
host = "scholar.google.com"
link = "/scholar.bib?q=info:K7uZdMSvdQ0J:scholar.google.com/&output=citation&hl=en&as_sdt=1,14&ct=citation&cd=0"
url = "http://" + host + link
filename = "cite0.bib"
print(url)
while True:
    try:
        print("Downloading...")
        time.sleep(random.randint(0, 5))
        urllib.request.urlretrieve(url, filename)
        break
    except ValueError:
        pass
but this just prints "Downloading..." ad infinitum.
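Even if the failure were transient, the loop above never gives up. A bounded version would at least fail loudly instead of spinning forever; a sketch:

import random
import time
import urllib.request

host = "scholar.google.com"
link = "/scholar.bib?q=info:K7uZdMSvdQ0J:scholar.google.com/&output=citation&hl=en&as_sdt=1,14&ct=citation&cd=0"
url = "http://" + host + link
filename = "cite0.bib"

MAX_ATTEMPTS = 5  # stop after a few tries instead of looping forever

for attempt in range(1, MAX_ATTEMPTS + 1):
    print("Downloading... (attempt %d)" % attempt)
    try:
        urllib.request.urlretrieve(url, filename)
        break
    except ValueError:
        time.sleep(random.randint(1, 5))  # back off before retrying
else:
    print("Giving up after %d attempts" % MAX_ATTEMPTS)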
If you look at http://scholar.google.com/robots.txt, you can see that Google forbids automated downloads of this page. And if you try using wget, you will get a 403 Forbidden error. I suspect this is also happening to your script. – Hyperpyrexia