spacy with joblib library generates _pickle.PicklingError: Could not pickle the task to send it to the workers

I have a large list of sentences (~7 million), and I want to extract the nouns from them.

I used the joblib library to parallelize the extraction process, like in the following:

import spacy
from tqdm import tqdm
from joblib import Parallel, delayed
nlp = spacy.load('en_core_web_sm')

class nouns:

    def get_nouns(self, text):
        doc = nlp(u"{}".format(text))
        return [token.text for token in doc if token.tag_ in ['NN', 'NNP', 'NNS', 'NNPS']]

    def parallelize(self, sentences):
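        # n_jobs=1 runs fine; any n_jobs > 1 raises the PicklingError shown below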
        results = Parallel(n_jobs=1)(delayed(self.get_nouns)(sent) for sent in tqdm(sentences))
        return results

if __name__ == '__main__':
    sentences = ['we went to the school yesterday',
                 'The weather is really cold',
                 'Can we catch the dog?',
                 'How old are you John?',
                 'I like diving and swimming',
                 'Can the world become united?']
    obj = nouns()
    print(obj.parallelize(sentences))

When n_jobs in the parallelize function is greater than 1, I get this long error:

100%|██████████| 6/6 [00:00<00:00, 200.00it/s]
joblib.externals.loky.process_executor._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "C:\Python35\lib\site-packages\joblib\externals\loky\backend\queues.py", line 150, in _feed
    obj_ = dumps(obj, reducers=reducers)
  File "C:\Python35\lib\site-packages\joblib\externals\loky\backend\reduction.py", line 243, in dumps
    dump(obj, buf, reducers=reducers, protocol=protocol)
  File "C:\Python35\lib\site-packages\joblib\externals\loky\backend\reduction.py", line 236, in dump
    _LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
  File "C:\Python35\lib\site-packages\joblib\externals\cloudpickle\cloudpickle.py", line 267, in dump
    return Pickler.dump(self, obj)
  File "C:\Python35\lib\pickle.py", line 408, in dump
    self.save(obj)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 836, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 841, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 836, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 770, in save_list
    self._batch_appends(obj)
  File "C:\Python35\lib\pickle.py", line 797, in _batch_appends
    save(tmp[0])
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 725, in save_tuple
    save(element)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\site-packages\joblib\externals\cloudpickle\cloudpickle.py", line 718, in save_instancemethod
    self.save_reduce(types.MethodType, (obj.__func__, obj.__self__), obj=obj)
  File "C:\Python35\lib\pickle.py", line 599, in save_reduce
    save(args)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 725, in save_tuple
    save(element)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\site-packages\joblib\externals\cloudpickle\cloudpickle.py", line 395, in save_function
    self.save_function_tuple(obj)
  File "C:\Python35\lib\site-packages\joblib\externals\cloudpickle\cloudpickle.py", line 594, in save_function_tuple
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 836, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 841, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 836, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 599, in save_reduce
    save(args)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 740, in save_tuple
    save(element)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 740, in save_tuple
    save(element)
  File "C:\Python35\lib\pickle.py", line 495, in save
    rv = reduce(self.proto)
  File "stringsource", line 2, in preshed.maps.PreshMap.__reduce_cython__
TypeError: self.c_map cannot be converted to a Python object for pickling
"""Exception in thread QueueFeederThread:
Traceback (most recent call last):
  File "C:\Python35\lib\site-packages\joblib\externals\loky\backend\queues.py", line 150, in _feed
    obj_ = dumps(obj, reducers=reducers)
  File "C:\Python35\lib\site-packages\joblib\externals\loky\backend\reduction.py", line 243, in dumps
    dump(obj, buf, reducers=reducers, protocol=protocol)
  File "C:\Python35\lib\site-packages\joblib\externals\loky\backend\reduction.py", line 236, in dump
    _LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
  File "C:\Python35\lib\site-packages\joblib\externals\cloudpickle\cloudpickle.py", line 267, in dump
    return Pickler.dump(self, obj)
  File "C:\Python35\lib\pickle.py", line 408, in dump
    self.save(obj)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 836, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 841, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 836, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 770, in save_list
    self._batch_appends(obj)
  File "C:\Python35\lib\pickle.py", line 797, in _batch_appends
    save(tmp[0])
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 725, in save_tuple
    save(element)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\site-packages\joblib\externals\cloudpickle\cloudpickle.py", line 718, in save_instancemethod
    self.save_reduce(types.MethodType, (obj.__func__, obj.__self__), obj=obj)
  File "C:\Python35\lib\pickle.py", line 599, in save_reduce
    save(args)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 725, in save_tuple
    save(element)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\site-packages\joblib\externals\cloudpickle\cloudpickle.py", line 395, in save_function
    self.save_function_tuple(obj)
  File "C:\Python35\lib\site-packages\joblib\externals\cloudpickle\cloudpickle.py", line 594, in save_function_tuple
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 836, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 841, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Python35\lib\pickle.py", line 836, in _batch_setitems
    save(v)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 599, in save_reduce
    save(args)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 740, in save_tuple
    save(element)
  File "C:\Python35\lib\pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python35\lib\pickle.py", line 623, in save_reduce
    save(state)
  File "C:\Python35\lib\pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python35\lib\pickle.py", line 740, in save_tuple
    save(element)
  File "C:\Python35\lib\pickle.py", line 495, in save
    rv = reduce(self.proto)
  File "stringsource", line 2, in preshed.maps.PreshMap.__reduce_cython__
TypeError: self.c_map cannot be converted to a Python object for pickling

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python35\lib\threading.py", line 914, in _bootstrap_inner
    self.run()
  File "C:\Python35\lib\threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python35\lib\site-packages\joblib\externals\loky\backend\queues.py", line 175, in _feed
    onerror(e, obj)
  File "C:\Python35\lib\site-packages\joblib\externals\loky\process_executor.py", line 310, in _on_queue_feeder_error
    self.thread_wakeup.wakeup()
  File "C:\Python35\lib\site-packages\joblib\externals\loky\process_executor.py", line 155, in wakeup
    self._writer.send_bytes(b"")
  File "C:\Python35\lib\multiprocessing\connection.py", line 183, in send_bytes
    self._check_closed()
  File "C:\Python35\lib\multiprocessing\connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed



The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File ".../playground.py", line 43, in <module>
    print(obj.Paralize(sentences))
  File ".../playground.py", line 32, in Paralize
    results = Parallel(n_jobs=2)(delayed(self.get_nouns)(sent) for sent in tqdm(sentences))
  File "C:\Python35\lib\site-packages\joblib\parallel.py", line 934, in __call__
    self.retrieve()
  File "C:\Python35\lib\site-packages\joblib\parallel.py", line 833, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "C:\Python35\lib\site-packages\joblib\_parallel_backends.py", line 521, in wrap_future_result
    return future.result(timeout=timeout)
  File "C:\Python35\lib\concurrent\futures\_base.py", line 405, in result
    return self.__get_result()
  File "C:\Python35\lib\concurrent\futures\_base.py", line 357, in __get_result
    raise self._exception
_pickle.PicklingError: Could not pickle the task to send it to the workers.

What is the problem in my code?

Langue answered 4/7, 2019 at 8:48 Comment(2)
I added an alternative solution below to parallelize the mentioned spaCy process. – Langue
If dill works to parallelize the code, then would not multiprocess (i.e. multiprocessing using dill) also be appropriate? – Chemar

Q: What is the problem in my code?

Well, most probably the issue does not come from your code itself, but from the "hidden" processing that happens once n_jobs directs joblib (which orchestrates this internally) to prepare that many copies of the main process, so that they can work independently of one another (effectively escaping the GIL and mapping the separate process flows onto physical hardware resources).

This step has to copy all the Python objects needed by the task, and it uses pickle to do so. The pickle module has long-standing limitations on what can and cannot be pickled.

The error message confirms this:

TypeError: self.c_map cannot be converted to a Python object for pickling

One trick you can try is to use Mike McKerns' dill module instead of pickle and test whether your "problematic" Python objects can be pickled with it without raising this error.

dill exposes the same API, so a plain import dill as pickle may help while leaving all the other code the same.
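
A minimal sketch of that test (an illustration only: it checks whether dill can serialize the bound method and the spaCy objects it drags along, but it does not by itself change which pickler joblib uses internally):

import dill  # pip install dill

obj = nouns()
try:
    payload = dill.dumps(obj.get_nouns)   # the object loky fails to pickle
    restored = dill.loads(payload)
    print(restored('we went to the school yesterday'))
except TypeError as exc:
    print('dill cannot pickle it either:', exc)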

I had the same problem with large models being distributed to and back from multiple processes, and dill was the way to go. Performance also improved.

Bonus: dill can save and restore the full Python interpreter state!

This was a nice side effect of discovering dill: once import dill as pickle is done, pickle.dump_session( <aFile> ) saves a complete, stateful copy of the Python interpreter session. It can be restored later if needed (post-crash recovery, a trained and optimised ML model statefully saved and restored, an incrementally trained ML model statefully saved and redistributed for remote restores across deployed user bases, etc.).
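
For illustration, a small sketch of that session save/restore (the filename and variables are made up for the example):

import dill as pickle

progress = {'model': 'en_core_web_sm', 'sentences_done': 42}

pickle.dump_session('session.pkl')   # snapshot the whole interpreter session
# ... later, possibly in a brand-new interpreter ...
pickle.load_session('session.pkl')   # 'progress' and friends are restored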

Soble answered 4/7, 2019 at 10:22 Comment(3)
Thanks for the brief explanation. I tried import dill as pickle and it really works, but the performance doesn't improve. I think I have to find another way to parallelize the process. – Langue
Where should I add the import dill as pickle statement? I tried adding it both before and after from joblib import Parallel, delayed, but I kept getting the same _pickle.PicklingError. – Headrest
@user_007 Can you please let us know where you added the import? Was it in one of the joblib files? – Trencher

Same issue here. I solved it by changing the backend from loky to threading in Parallel.
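
A minimal sketch of that change applied to the question's call (the n_jobs value is illustrative):

from joblib import Parallel, delayed

# backend="threading" keeps all workers in one process, so nothing has to be
# pickled; the speed-up then depends on how much of the work releases the GIL.
results = Parallel(n_jobs=4, backend="threading")(
    delayed(obj.get_nouns)(sent) for sent in sentences
)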

Denotation answered 4/8, 2021 at 18:38 Comment(5)
Thank you for this comment, it helped me :) – Votyak
Is there any catch to using the threading backend? A performance penalty? – Fulltime
Good question. I'm not a joblib expert, but I found this in the scikit-learn docs (sklearn uses joblib in the background): "'loky' is recommended to run functions that manipulate Python objects. 'threading' is a low-overhead alternative that is most efficient for functions that release the Global Interpreter Lock: e.g. I/O-bound code or CPU-bound code in a few calls to native code that explicitly releases the GIL." – Denotation
with parallel_backend('threading', n_jobs=2): Parallel()(delayed(sqrt)(i ** 2) for i in range(10)) – Insignia
@Fulltime The default behavior of joblib is to use multiprocessing, whereas prefer="threading" is a hint to use threads instead: joblib.readthedocs.io/en/latest/… – Iroquoian

In case you are still not able to find a solution: I resolved the error by changing

Parallel(n_jobs=8)

to

Parallel(n_jobs=8, prefer="threads")
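
For context, a sketch of how that could look in the question's parallelize method (n_jobs=8 is just the value from this answer):

    def parallelize(self, sentences):
        # prefer="threads" hints joblib to use the threading backend, so the
        # spaCy pipeline never has to be pickled and shipped to worker processes.
        return Parallel(n_jobs=8, prefer="threads")(
            delayed(self.get_nouns)(sent) for sent in tqdm(sentences)
        )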
Trichomonad answered 3/11, 2022 at 6:47 Comment(0)

An additional answer for my question:

I didn't find a solution for joblib with spaCy, but to parallelize the process I found that spaCy provides something called a pipeline (nlp.pipe), which lets you parse a large number of documents with multiple threads.

I applied it to the same example above:

import time
import spacy

nlp = spacy.load('en_core_web_sm')

class nouns:

    def get_nouns(self, sentences):
        start = time.time()
        docs = nlp.pipe(sentences, n_threads=-1)
        result = [ ' '.join([token.text for token in doc if token.tag_ in ['NN', 'NNP', 'NNS', 'NNPS']]) for doc in docs]
        print('Time Elapsed {} ms'.format((time.time() - start) * 1000))
        print(result)


if __name__ == '__main__':
    sentences = ['we went to the school yesterday',
                 'The weather is really cold',
                 'Can we catch the dog?',
                 'How old are you John?',
                 'I like diving and swimming',
                 'Can the world become united?']
    obj = nouns()
    obj.get_nouns(sentences)
Langue answered 4/7, 2019 at 12:55 Comment(0)

I had a similar problem when parallelizing lemmatization, but with a different library, pymystem3.

from pymystem3 import Mystem

mystem = Mystem()

def preprocess_text(text):
    ...
    tokens = mystem.lemmatize(text)
    ...
    text = " ".join(tokens)
    return text

data_set = Parallel(n_jobs=-1)(delayed(preprocess_text)(article) for article in tqdm(articles))

The solution was to move the initialization inside the function.

def preprocess_text(text):
    ...
    mystem = Mystem()
    tokens = mystem.lemmatize(text)
    ...
    text = " ".join(tokens)
    return text

I suspect you could try the same with nlp = spacy.load
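
For example, a minimal sketch of that idea applied to the question's code (an assumption on my part: loading the model inside the function is simple but expensive per call, so here it is cached lazily in a module-level variable that each worker process fills on first use):

import spacy
from joblib import Parallel, delayed

_nlp = None

def get_nouns(text):
    global _nlp
    if _nlp is None:                        # loaded once per worker process
        _nlp = spacy.load('en_core_web_sm')
    doc = _nlp(text)
    return [t.text for t in doc if t.tag_ in ('NN', 'NNP', 'NNS', 'NNPS')]

results = Parallel(n_jobs=2)(delayed(get_nouns)(sent) for sent in sentences)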

Queri answered 9/2, 2021 at 17:10 Comment(0)

Just want to add my two cents: use @staticmethod instead of a regular class method and spare yourself the auto-injected self object, to avoid accidentally serializing a whole framework, as happened in my case (Flask). The framework does a lot of behind-the-scenes injection, which blows up the serialization dependencies.
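
A minimal sketch of the idea with a hypothetical helper class (the names are illustrative, not from the question):

from joblib import Parallel, delayed

class TextTools:

    @staticmethod
    def tokenize(text):
        # No bound self here, so only this plain function and its arguments
        # need to be serialized and sent to the workers.
        return text.split()

results = Parallel(n_jobs=2)(
    delayed(TextTools.tokenize)(s) for s in ['a b', 'c d']
)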

Sharla answered 12/1, 2022 at 9:21 Comment(0)

Case study on how I fixed this:

Environment

  • Windows 10 x64
  • Python 3.9 or 3.10
  • joblib v1.1

Solution

# Examine the stack trace very carefully, you will see a line something like this:
TypeError: self.c_map cannot be converted to a Python object for pickling

This tells you exactly what variable cannot be serialized.

To fix, choose one option:

  1. Remove the variable from the function.
  2. Initialize the variable in the function from scratch.

In my case, I had to use a mixture of #1 and #2:

  1. Removed a variable that pointed to a class holding a handle to an open file (pickle cannot serialize anything that holds an open file handle).
  2. A class variable could not be pickled, so I initialized it again inside the function (which removes the need to serialize this class and pass it to the new process).

Example code

# [Bugfix]. Add next line to initialize this again to eliminate pmap pickle error. Stacktrace is your friend!
hdb = HivedbApi(base_dir=hivedb_base_dir, table_name=table_name, partition_type=PartitionType.HiveFilePerDate)
hdb.write(df_trades)

In the example in the OP, I would be hunting for some variable inside get_nouns() that could not be serialized, based on the stack trace (it will tell you exactly what variable it is stumbling over).

Solutions that did not work

Nothing else on this page worked for me, including changing the backend to threading, switching the pickler to dill, annotating the function, changing the Python version, etc.

Bottom line:

Sometimes, nothing can serialize a class, especially if it has handles to open files. In this case, the only solution is to (a) remove these variables from the target function, or (b) reinitialize these variables inside the target function.

Outbrave answered 7/5, 2022 at 8:46 Comment(0)
