How to properly pickle sklearn pipeline when using custom transformer

I am trying to pickle a sklearn machine-learning model and load it in another project. The model is wrapped in a pipeline that does feature encoding, scaling, etc. The problem starts when I want to use self-written transformers in the pipeline for more advanced tasks.

Let's say I have 2 projects:

  • train_project: it has the custom transformers in src.feature_extraction.transformers.py
  • use_project: it has other things in src, or has no src catalog at all

If in "train_project" I save the pipeline with joblib.dump(), and then in "use_project" i load it with joblib.load() it will not find something such as "src.feature_extraction.transformers" and throw exception:

ModuleNotFoundError: No module named 'src.feature_extraction'

I should also add that my intention from the beginning was to simplify usage of the model, so a programmer can load the model like any other model, pass very simple, human-readable features, and all the "magic" preprocessing of features for the actual model (e.g. gradient boosting) happens inside.

I thought of creating a /dependencies/xxx_model/ catalog in the root of both projects and storing all needed classes and functions there (copying the code from "train_project" to "use_project"), so the structure of the projects is equal and the transformers can be loaded. I find this solution extremely inelegant, because it would force the structure of any project where the model is used.

I thought of just recreating the pipeline and all transformers inside "use_project" and somehow loading the fitted values of the transformers from "train_project".

The best possible solution would be if the dumped file contained all the needed info and required no dependencies, and I am honestly shocked that sklearn Pipelines seem not to have that possibility - what's the point of fitting a pipeline if I can not load the fitted object later? Yes, it would work if I used only sklearn classes and did not create custom ones, but the non-custom ones do not have all the needed functionality.

Example code:

train_project

src.feature_extraction.transformers.py

from sklearn.base import TransformerMixin
class FilterOutBigValuesTransformer(TransformerMixin):
    def __init__(self):
        pass

    def fit(self, X, y=None):
        self.biggest_value = X.c1.max()
        return self

    def transform(self, X):
        return X.loc[X.c1 <= self.biggest_value]

train_project

main.py

import joblib  # sklearn.externals.joblib is removed in recent scikit-learn versions
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from src.feature_extraction.transformers import FilterOutBigValuesTransformer

pipeline = Pipeline([
    ('filter', FilterOutBigValuesTransformer()),
    ('encode', MinMaxScaler()),
])
X = load_some_pandas_dataframe()
pipeline.fit(X)
joblib.dump(pipeline, 'path.x')

test_project

main.py

import joblib  # sklearn.externals.joblib is removed in recent scikit-learn versions

pipeline = joblib.load('path.x')

The expected result is that the pipeline loads correctly and its transform method can be used.

The actual result is an exception when loading the file.

Signature answered 11/9, 2019 at 11:36

Comment: I have the same question; I will share what I've tried so far: interchanging joblib and pickle, re-importing my custom FeatureUnion subclass. Please post here if you figure a way out. – Muskeg

I found a pretty straightforward solution. Assuming you are using Jupyter notebooks for training:

  1. Create a .py file where the custom transformer is defined and import it into the Jupyter notebook.

This is the file custom_transformer.py

from sklearn.base import TransformerMixin

class FilterOutBigValuesTransformer(TransformerMixin):
    def __init__(self):
        pass

    def fit(self, X, y=None):
        self.biggest_value = X.c1.max()
        return self

    def transform(self, X):
        return X.loc[X.c1 <= self.biggest_value]

  2. Train your model, importing this class from the .py file, and save it using joblib:
import joblib
from custom_transformer import FilterOutBigValuesTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

pipeline = Pipeline([
    ('filter', FilterOutBigValuesTransformer()),
    ('encode', MinMaxScaler()),
])

X=load_some_pandas_dataframe()
pipeline.fit(X)

joblib.dump(pipeline, 'pipeline.pkl')

  3. When loading the .pkl file in a different Python script, you will have to import the .py file in order to make it work:
import joblib
from utils import custom_transformer # decided to save it in a utils directory

pipeline = joblib.load('pipeline.pkl')

Conchita answered 20/5, 2020 at 18:37

Credit to Ture Friese for mentioning cloudpickle >=2.0.0, but here's an example for your use case.

import cloudpickle

# register_pickle_by_value() takes a module, so register the module that
# defines the custom transformer; its code is then serialized by value.
import src.feature_extraction.transformers
cloudpickle.register_pickle_by_value(src.feature_extraction.transformers)

with open('./pipeline.cloudpkl', mode='wb') as file:
    cloudpickle.dump(pipeline, file)

register_pickle_by_value() is the key, as it ensures your custom module (src.feature_extraction.transformers) is also included when serializing your primary object (pipeline). However, this is not built for recursive module dependence, e.g. if FilterOutBigValuesTransformer itself imports from yet another custom module.
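
For completeness, a minimal sketch of the loading side: because the module was registered by value, the resulting file can be deserialized with plain pickle in an environment that does not contain src.feature_extraction at all (the file path is the one assumed above):

import pickle

with open('./pipeline.cloudpkl', mode='rb') as file:
    pipeline = pickle.load(file)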

Predicable answered 11/8, 2022 at 17:26

Apparently this problem arises when you split the definitions and the saving code into two different files. So I have found this workaround that has worked for me.

It consists of these steps:

Say we have your two projects/repositories: train_project and use_project

train_project:

  • In your train_project, create a Jupyter notebook or a .py file.

  • In that file, define every custom transformer as a class, and import all the other tools needed from sklearn to design the pipelines. Then write the saving/pickling code inside that same file. (Don't create an external .py file such as src.feature_extraction.transformers to define your custom transformers.)

  • Then fit and dump your pipeline by running that file.

On use_project:

  • Create a customthings.py file with all the functions and transformers defined inside.
  • Create another file_where_load.py where you wish to load the pickle. Inside, make sure you have imported all the definitions from customthings.py (see the sketch below). Ensure that the functions and classes have the same names as the ones you used in train_project.
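
A minimal sketch of that loading file, assuming the training script defined FilterOutBigValuesTransformer at its top level (i.e. in __main__) and dumped the pipeline to pipeline.pkl (both names assumed):

# file_where_load.py - run as the main script in use_project
import joblib

# Importing the class here puts it into this script's (__main__) namespace,
# which is where pickle looks it up, since the class was defined in the
# training script's __main__ at dump time.
from customthings import FilterOutBigValuesTransformer

pipeline = joblib.load('pipeline.pkl')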

I hope it works for everyone with the same problem.

Bluefarb answered 11/3, 2022 at 16:15

I have created a workaround solution. I do not consider it a complete answer to my question, but nonetheless it let me move on from my problem.

Conditions for the workaround to work:

I. Pipeline needs to have only 2 kinds of transformers:

  1. sklearn transformers
  2. custom transformers, but with only attributes of types:
    • number
    • string
    • list
    • dict

or any combination of those, e.g. a list of dicts with strings and numbers. The generally important thing is that the attributes are JSON-serializable.

II. The names of the pipeline steps need to be unique (even if there is pipeline nesting).


In short, the model would be stored as a catalog with joblib-dumped files, a JSON file for the custom transformers, and a JSON file with other info about the model.

I have created a function that goes through the steps of a pipeline and checks the __module__ attribute of each transformer.

If it finds sklearn in it, it runs joblib.dump under the name specified in steps (the first element of the step tuple), into some selected model catalog.

Otherwise (no sklearn in __module__) it adds the transformer's __dict__ to result_dict under a key equal to the name specified in steps. At the end I json.dump the result_dict to the model catalog under the name result_dict.json.
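
A minimal sketch of such a dump function under the conditions above (the function name, file names, and directory layout are illustrative, not the original code):

import json
import os

import joblib

def dump_pipeline_steps(pipeline, model_dir):
    # Dump sklearn steps with joblib; collect custom steps' attributes in one dict.
    os.makedirs(model_dir, exist_ok=True)
    result_dict = {}
    for name, transformer in pipeline.steps:
        if 'sklearn' in type(transformer).__module__:
            joblib.dump(transformer, os.path.join(model_dir, f'{name}.joblib'))
        else:
            # Custom transformer: its attributes must be JSON-serializable.
            result_dict[name] = transformer.__dict__
    with open(os.path.join(model_dir, 'result_dict.json'), 'w') as f:
        json.dump(result_dict, f)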

If there is a need to go into some transformer, because e.g. there is a Pipeline inside a pipeline, you can probably run this function recursively by adding some rules at the beginning of the function, but it becomes important to always have unique step/transformer names, even between the main pipeline and subpipelines.

If other information is needed for the creation of the model pipeline, save it in model_info.json.


Then, if you want to load the model for usage: you need to create (without fitting) the same pipeline in the target project. If pipeline creation is somewhat dynamic and you need information from the source project, load it from model_info.json.

You can copy function used for serialization and:

  • replace all joblib.dump statements with joblib.load, and assign the __dict__ of the loaded object to the __dict__ of the object already in the pipeline
  • replace all places where you added a __dict__ to result_dict with an assignment of the appropriate value from result_dict to the object's __dict__ (remember to load result_dict from the file beforehand)

After running this modified function, the previously unfitted pipeline should have all the transformer attributes that were the effect of fitting loaded, and the pipeline as a whole should be ready to predict.
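
A matching sketch of the loading side, under the same assumptions; the pipeline argument is the freshly constructed, unfitted pipeline recreated in the target project:

import json
import os

import joblib

def load_pipeline_steps(pipeline, model_dir):
    # Restore the fitted state into an already-constructed (unfitted) pipeline.
    with open(os.path.join(model_dir, 'result_dict.json')) as f:
        result_dict = json.load(f)
    for name, transformer in pipeline.steps:
        if 'sklearn' in type(transformer).__module__:
            fitted = joblib.load(os.path.join(model_dir, f'{name}.joblib'))
            transformer.__dict__ = fitted.__dict__
        else:
            transformer.__dict__.update(result_dict[name])
    return pipeline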

The main things I do not like about this solution are that it needs the pipeline code inside the target project and that all attributes of the custom transformers must be JSON-serializable, but I leave it here for other people who stumble on a similar problem; maybe somebody will come up with something better.

Signature answered 3/10, 2019 at 8:43

Have you tried using cloudpickle? https://github.com/cloudpipe/cloudpickle

Roundabout answered 28/1, 2020 at 11:46

I was similarly surprised when I came across the same problem some time ago. Yet there are multiple ways to address this.

Best practice solution:

As others have mentioned, the best practice solution is to move all dependencies of your pipeline into a separate Python package and define that package as a dependency of your model environment.

The environment then has to be recreated whenever the model is deployed. In simple cases this can be done manually e.g. via virtualenv or Poetry. But model stores and versioning frameworks (MLflow being one example) typically provide a way to define the required Python environment (e.g. via conda.yaml). They often can automatically recreate the environment at deployment time.
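
As an illustration only, with assumed names and versions (my-transformers-package standing in for the separate package that holds the custom transformer code), logging a fitted pipeline together with its environment in MLflow could look roughly like this:

import mlflow
import mlflow.sklearn

with mlflow.start_run():
    # Log the fitted pipeline plus the pip requirements of its environment;
    # MLflow can recreate this environment before loading the model at deployment time.
    mlflow.sklearn.log_model(
        sk_model=pipeline,
        artifact_path="model",
        pip_requirements=["scikit-learn==1.3.2", "my-transformers-package==0.1.0"],
    )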

Solution by putting code into main:

In fact, class and function declarations can be serialized, but only declarations in __main__ actually get serialized. __main__ is the module of the entry-point script, the file that is run. So if all the custom code and all of its dependencies are in that file, then custom objects can later be loaded in Python environments that do not include the code. This kind of solves the problem, but who wants to have all that code in __main__? (Note that this property also applies to cloudpickle.)

Solution by "mainifying":

There is one other way, which is to "mainify" the classes or function objects before saving. I came across that same problem some time ago and have written a function that does that. It essentially redefines an existing object's code in __main__. Its application is simple: pass the object to the function, then serialize the object - voilà, it can be loaded anywhere. Like so:

# ------ In file1.py: ------

class Foo():
    pass

# ------ In file2.py: ------
from file1 import Foo

foo = Foo()
foo = mainify(foo)

import dill

with open('path/file.dill', 'wb') as f:
    dill.dump(foo, f)

I post the function code below. Note that I have tested this with dill, but I think it should work with pickle as well.

Also note that the original idea is not mine, but came from a blog post that I cannot find right now. I will add the reference/acknowledgement when I find it. Edit: Blog post by Oege Dijk by which my code was inspired.

import inspect
import types

def mainify(obj, warn_if_exist=True):
    ''' If obj is not defined in __main__ then redefine it in main. Allows dill 
    to serialize custom classes and functions such that they can later be loaded
    without them being declared in the load environment.

    Parameters
    ---------
    obj           : Object to mainify (function or class instance)
    warn_if_exist : Bool, default True. Throw exception if function (or class) of
                    same name as the mainified function (or same name as mainified
                    object's __class__) was already defined in __main__. If False
                    don't throw exception and instead use what was defined in
                    __main__. See Limitations.
    Limitations
    -----------
    Assumes `obj` is either a function or an instance of a class.                
    ''' 
    if obj.__module__ != '__main__':                                                
        
        import __main__       
        is_func = isinstance(obj, types.FunctionType)
        
        # Check if obj with same name is already defined in __main__ (for funcs)
        # or if class with same name as obj's class is already defined in __main__.
        # If so, simply return the func with same name from __main__ (for funcs)
        # or assign the class of same name to obj and return the modified obj        
        if is_func:
            on = obj.__name__
            if on in __main__.__dict__.keys():
                if warn_if_exist:
                    raise RuntimeError(f'Function with __name__ `{on}` already defined in __main__')
                return __main__.__dict__[on]
        else:
            ocn = obj.__class__.__name__
            if ocn  in __main__.__dict__.keys():
                if warn_if_exist:
                    raise RuntimeError(f'Class with obj.__class__.__name__ `{ocn}` already defined in __main__')
                obj.__class__ = __main__.__dict__[ocn]                
                return obj
                                
        # Get source code and compile
        source = inspect.getsource(obj if is_func else obj.__class__)
        compiled = compile(source, '<string>', 'exec')                    
        # "declare" in __main__, keeping track which key of __main__ dict is the new one        
        pre = list(__main__.__dict__.keys()) 
        exec(compiled, __main__.__dict__)
        post = list(__main__.__dict__.keys())                        
        new_in_main = list(set(post) - set(pre))[0]
        
        # for function return mainified version, else assign new class to obj and return object
        if is_func:
            obj = __main__.__dict__[new_in_main]            
        else:            
            obj.__class__ = __main__.__dict__[new_in_main]
                
    return obj
Supramolecular answered 22/6, 2022 at 14:5

Based on my research it seems that the best solution is to create a Python package that includes your trained pipeline and all of its files.

Then you can pip install it in the project where you want to use it and import the pipeline with from <package name> import <pipeline name>.
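
A hedged sketch of what that package could look like, with hypothetical names (my_model_pkg, artifacts/pipeline.joblib); the assumption is that the pipeline was dumped while the transformer class already lived at my_model_pkg.transformers:

# my_model_pkg/__init__.py
from pathlib import Path

import joblib

from .transformers import FilterOutBigValuesTransformer  # custom code ships inside the package

_ARTIFACT = Path(__file__).parent / "artifacts" / "pipeline.joblib"

def load_pipeline():
    # Return the fitted pipeline that is bundled with the package as data.
    return joblib.load(_ARTIFACT)

In use_project this reduces to pip install my-model-pkg followed by from my_model_pkg import load_pipeline; unpickling finds the custom transformer because my_model_pkg.transformers is importable in that environment.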

Dihedron answered 4/8, 2020 at 13:10

Adding the location of the transformer code to sys.path may resolve the issue. Note that the directory to append is the one that contains the src package (the train_project root), so that src.feature_extraction.transformers becomes importable when unpickling:

import sys

# Append the directory that contains the `src` package (path assumed here),
# so that `src.feature_extraction.transformers` can be imported during loading.
sys.path.append("/path/to/train_project")
Frisk answered 1/5, 2022 at 16:43
