How do I check whether urllib.urlretrieve(url, file_name)
has completed before allowing my program to advance to the next statement?
Take, for example, the following code snippet:
import traceback
import sys
import time
import Image
from urllib import urlretrieve

try:
    print "Downloading gif....."
    urlretrieve(imgUrl, "tides.gif")
    # Allow time for image to download/save:
    time.sleep(5)
    print "Gif Downloaded."
except:
    print "Failed to Download new GIF"
    raw_input('Press Enter to exit...')
    sys.exit()

try:
    print "Converting GIF to JPG...."
    Image.open("tides.gif").convert('RGB').save("tides.jpg")
    print "Image Converted"
except Exception, e:
    print "Conversion FAIL:", sys.exc_info()[0]
    traceback.print_exc()
When the download of tides.gif via urlretrieve(imgUrl, "tides.gif")
takes longer than time.sleep(seconds), the file is left empty or incomplete,
and Image.open("tides.gif") raises an IOError
(because the tides.gif file is 0 kB).
How can I check the status of urlretrieve(imgUrl, "tides.gif")
so that my program advances only after that statement has completed successfully?
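For reference, here is a minimal sketch of one way to verify the transfer, assuming Python 2's urllib (where urlretrieve blocks until the file has been written and returns the local filename plus the response headers); the URL below is only a placeholder:

    import os
    from urllib import urlretrieve

    # Placeholder URL for illustration only.
    imgUrl = "http://example.com/tides.gif"

    # urlretrieve blocks until the transfer finishes (or raises),
    # so no sleep is needed before using the file.
    filename, headers = urlretrieve(imgUrl, "tides.gif")

    # Sanity-check the result: compare the size on disk with the
    # Content-Length header, when the server supplies one.
    expected = headers.getheader("Content-Length")
    actual = os.path.getsize(filename)
    if expected is not None and int(expected) != actual:
        raise IOError("Download incomplete: got %d of %s bytes" % (actual, expected))
    print "Gif Downloaded (%d bytes)." % actual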