I am trying to write my first Python script, and with lots of Googling I think I am just about done. However, I need some help getting across the finish line.
I need to write a script that logs onto a cookie-enabled site, scrapes a bunch of links, and then spawns a few processes to download the files. I have the program running single-threaded, so I know the code works. But when I tried to create a pool of download workers, I ran into a wall.
#manager.py
import multiprocessing
import Fetch # the module where the worker lives

def FetchReports(links, Username, Password, VendorID):
    # SiteBase and DataPath are module-level globals set elsewhere
    pool = multiprocessing.Pool(processes=4, initializer=Fetch._ProcessStart,
                                initargs=(SiteBase, DataPath, Username, Password, VendorID))
    pool.map(Fetch.DownloadJob, links)
    pool.close()
    pool.join()
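For context, I drive it roughly like this (placeholder values, not my real ones; the __main__ guard matters because multiprocessing re-imports the module on Windows):

# hypothetical driver with placeholder values
SiteBase = 'https://reports.example.com/'
DataPath = 'C:\\Reports'
if __name__ == '__main__':
    links = ['report1.pdf', 'report2.pdf']
    FetchReports(links, 'myuser', 'mypassword', '1234')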
#worker.py
import os
import atexit
import mechanize

def _ProcessStart(_SiteBase, _DataPath, User, Password, VendorID):
    Login(User, Password)  # Login/Logout are defined elsewhere in this module
    global SiteBase, DataPath
    SiteBase = _SiteBase
    DataPath = _DataPath
    atexit.register(Logout)

def DownloadJob(link):
    filename = link.split('/')[-1]  # derive a local filename from the link
    mechanize.urlretrieve(mechanize.urljoin(SiteBase, link),
                          filename=os.path.join(DataPath, filename))
    return True
In this revision, the code fails because the cookies have not been transferred to the worker for urlretrieve to use. No problem: I was able to use mechanize's LWPCookieJar class to save the cookies in the manager and pass them to the worker.
#worker.py
import os
import atexit
import mechanize
from multiprocessing import current_process

def _ProcessStart(_SiteBase, _DataPath, User, Password, VendorID):
    global cookies
    cookies = mechanize.LWPCookieJar()
    opener = mechanize.build_opener(mechanize.HTTPCookieProcessor(cookies))
    Login(User, Password, opener)  # note I pass the opener to Login so it can catch the cookies
    global SiteBase, DataPath
    SiteBase = _SiteBase
    DataPath = _DataPath
    # save this worker's cookies to its own file, keeping discardable/expired cookies
    cookies.save(os.path.join(DataPath, current_process().name + 'cookies.txt'), True, True)
    atexit.register(Logout)

def DownloadJob(link):
    cj = mechanize.LWPCookieJar()
    cj.revert(filename=os.path.join(DataPath, current_process().name + 'cookies.txt'),
              ignore_discard=True, ignore_expires=True)
    opener = mechanize.build_opener(mechanize.HTTPCookieProcessor(cj))
    filename = link.split('/')[-1]  # derive a local filename from the link
    with open(os.path.join(DataPath, filename), 'wb') as f:
        f.write(opener.open(mechanize.urljoin(SiteBase, link)).read())
But THAT fails because the opener (I think) wants to move the binary file back to the manager for processing, and I get an "unable to pickle object" error message referring to the web page it is trying to read into the file.
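If I understand the constraint right, anything a pool worker returns gets pickled and shipped back to the manager, so plain values are fine but open handles are not. A standalone sketch of that idea (no mechanize involved):

import multiprocessing

def good_job(link):
    return 'saved ' + link                 # plain strings pickle fine

def bad_job(link):
    return open(link, 'rb')                # open file objects cannot be pickled

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2)
    print(pool.map(good_job, ['a', 'b']))  # works: ['saved a', 'saved b']
    # pool.map(bad_job, ['a', 'b'])        # fails with a pickling error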
The obvious solution is to read the cookies out of the cookie jar and manually add them to the header when making the urlretrieve request, but I am trying to avoid that, which is why I am fishing for suggestions.
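For reference, the workaround I am trying to avoid would look something like this sketch (the cookie-header formatting is my own assumption):

import mechanize
# sketch of the manual approach: pull cookies out of the saved jar and
# attach them to the request header by hand (cookie_file, SiteBase and
# link come from the code above)
cj = mechanize.LWPCookieJar()
cj.revert(filename=cookie_file, ignore_discard=True, ignore_expires=True)
cookie_header = '; '.join('%s=%s' % (c.name, c.value) for c in cj)
request = mechanize.Request(mechanize.urljoin(SiteBase, link))
request.add_header('Cookie', cookie_header)
data = mechanize.urlopen(request).read()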