tsfresh library for Python is taking way too long to process
I came across the tsfresh library as a way to featurize time series data. The documentation is great, and it seems like the perfect fit for the project I am working on.

I wanted to implement the following code, shared in the quick start section of the tsfresh documentation, and it seems simple enough:

from tsfresh import extract_relevant_features
feature_filtered_direct = extract_relevant_features(result, y, column_id=0, column_sort=1)

My data included 400,000 rows of sensor data, with 6 sensors each for 15 different ids. I started running the code, and 17 hours later it still had not finished. I figured this might be too large a data set to run through the relevant feature extractor, so I trimmed it down to 3,000 rows, and then further down to 300. None of these actions made the code finish within an hour, and I just ended up shutting it down after an hour or so of waiting. I tried the standard feature extractor as well:

extracted_features = extract_features(timeseries, column_id="id", column_sort="time")

I also tried the example dataset that tsfresh presents in its quick start section, which is very similar to my original data and has about the same number of data points as my reduced set.

Does anybody have any experience with this code? How would you go about making it run faster? I'm using Anaconda with Python 2.7.

Update: This seems to be related to multiprocessing. Because I am on Windows, code that uses multiprocessing has to be protected by:

if __name__ == "__main__":
    main()

Once I added

if __name__ == "__main__":
    extracted_features = extract_features(timeseries, column_id="id", column_sort="time")

To my code, the example data worked. I'm still having some issues with running the extract_relevant_features function and with running the extract_features module on my own data set; it continues to run slowly. I have a feeling it's related to the multiprocessing freeze as well, but without any errors popping up it's impossible to tell. It's taking me about 30 minutes to extract features on less than 1% of my dataset.
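
For completeness, here is a minimal, self-contained version of the script with the guard in place, using the quick-start robot failures dataset (a sketch of the structure; the download/load helpers come from tsfresh's examples module):

from tsfresh import extract_features
from tsfresh.examples.robot_execution_failures import (
    download_robot_execution_failures, load_robot_execution_failures)

def main():
    # fetch and load the quick-start example dataset
    download_robot_execution_failures()
    timeseries, y = load_robot_execution_failures()
    extracted_features = extract_features(timeseries, column_id="id",
                                          column_sort="time")
    print(extracted_features.shape)

if __name__ == "__main__":
    # required on Windows: worker processes re-import this module,
    # and the guard keeps them from re-running the extraction at import time
    main()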

Dayak answered 14/12, 2016 at 16:56 Comment(2)
When I run the script from the console I get some interesting error messages that don't seem to show up in Anaconda. These error messages repeat in a constant loop, which may explain why the function is taking so long to execute (it may never finish). The RuntimeError produced is related to freeze_support(): "Attempt to start a new process before bootstrapping is finished." Not sure what any of that means. – Dayak
I am getting the same RuntimeError about starting a new process before the current process has finished its bootstrapping phase when I run it in the Windows command prompt. I did not get this error in Ubuntu on Windows. – Brockington
Which version of tsfresh did you use? Which OS?

We are aware of the high computational cost of some feature calculators. There is little we can do about it. In the future we will implement some tricks, like caching, to increase the efficiency of tsfresh further.

Have you tried calculating only the basic features by using MinimalFeatureExtractionSettings? It will only compute basic features such as max, min, median and so on, but should run way, way faster:

from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFeatureExtractionSettings

extracted_features = extract_features(timeseries, column_id="id", column_sort="time",
                                      feature_extraction_settings=MinimalFeatureExtractionSettings())

It is probably also a good idea to install the latest version from the repo with pip install git+https://github.com/blue-yonder/tsfresh. We are actively developing it, and master should contain the newest and freshest version ;).

Yogi answered 18/12, 2016 at 19:26 Comment(3)
I'm currently using version 0.3.0 on Windows 10. I just ran the extract_features module on about 197,000 rows of data, and it took about 830. It's the feature selector module that seems to take even longer when running. The full complement of features is extremely useful in my analysis; to scale this operation to millions of rows would take quite a bit of computing power. – Dayak
We just released version 0.4.0; it contains a ReasonableFeatureExtraction settings object that will extract all but two entropy features. With this I am able to process 100,000 time series of length 1,000, so 100 million rows, in around 3 hours on an i6800k (6 cores at 4.3 GHz). Further, I use parallelization="per_sample". – Yogi
As of version 0.11.2 it looks like ReasonableFeatureExtraction has been aptly renamed to EfficientFCParameters. – Tharpe
The syntax has changed slightly (see the docs); the current approach would be:

from tsfresh import extract_features
from tsfresh.feature_extraction import EfficientFCParameters, MinimalFCParameters

extract_features(timeseries, column_id="id", column_sort="time",
                 default_fc_parameters=MinimalFCParameters())

Or

extract_features(timeseries, column_id="id", column_sort="time",
                 default_fc_parameters=EfficientFCParameters())
Picoline answered 8/8, 2019 at 12:42 Comment(0)
Since version 0.15.0 we have improved our bindings for Apache Spark and dask. It is now possible to use the tsfresh feature extraction directly in your usual dask or Spark computation graph.

You can find the bindings in tsfresh.convenience.bindings, with the documentation here. For example, for dask it would look something like this (assuming df is a dask.DataFrame, for example the robot failure dataframe from our example):

from tsfresh.convenience.bindings import dask_feature_extraction_on_chunk
from tsfresh.feature_extraction.settings import EfficientFCParameters

# melt the wide sensor columns into long format: one (id, time, kind, value) row each
df = df.melt(id_vars=["id", "time"],
             value_vars=["F_x", "F_y", "F_z", "T_x", "T_y", "T_z"],
             var_name="kind", value_name="value")
df_grouped = df.groupby(["id", "kind"])
features = dask_feature_extraction_on_chunk(df_grouped, column_id="id", column_kind="kind",
                                            column_sort="time", column_value="value",
                                            default_fc_parameters=EfficientFCParameters())
                                            # or any other parameter set
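
The Spark variant uses the same bindings module. A sketch, assuming df is a PySpark DataFrame already melted into the same long format (columns id, kind, time, value):

from tsfresh.convenience.bindings import spark_feature_extraction_on_chunk
from tsfresh.feature_extraction.settings import EfficientFCParameters

# group the long-format Spark DataFrame exactly as in the dask example
df_grouped = df.groupby(["id", "kind"])
features = spark_feature_extraction_on_chunk(df_grouped, column_id="id", column_kind="kind",
                                             column_sort="time", column_value="value",
                                             default_fc_parameters=EfficientFCParameters())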

Using either dask or Spark (or anything similar) might help you with very large data, both for memory and for speed, as you can distribute the work over multiple machines. Of course, we still support the usual distributors (see the docs) as before.

In addition, it is possible to run tsfresh together with a task orchestration system such as luigi. You can create a task to

* read in the data for only one id and kind,
* extract the features, and
* write out the result to disk,

and let luigi handle all the rest (a sketch of such a task is below). You may find a possible implementation of this here on my blog.
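
A minimal sketch of such a luigi task; the per-id file layout, paths, and parameter choices are illustrative assumptions, not the implementation from the blog post:

import luigi
import pandas as pd
from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFCParameters

class ExtractFeatures(luigi.Task):
    # one task instance per time series id (hypothetical per-id CSV layout)
    ts_id = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget("features/%s.csv" % self.ts_id)

    def run(self):
        # read only the rows belonging to this id
        df = pd.read_csv("data/%s.csv" % self.ts_id)
        # n_jobs=0 disables tsfresh's own multiprocessing;
        # the luigi workers provide the parallelism instead
        features = extract_features(df, column_id="id", column_sort="time",
                                    default_fc_parameters=MinimalFCParameters(),
                                    n_jobs=0, disable_progressbar=True)
        with self.output().open("w") as f:
            features.to_csv(f)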

Tint answered 14/4, 2020 at 19:46 Comment(0)
I've found, at least on a multicore machine, that a better way to distribute the extract_features calculation over independent subgroups (identified by the column_id value) is through joblib.Parallel with the Loky backend.

For example, you define your feature extraction function on a single value of column_id and apply it in parallel:

from multiprocessing import cpu_count

from joblib import Parallel, delayed
from tqdm import tqdm
from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFCParameters

# any tsfresh parameter object works here; MinimalFCParameters is just an example
settings = MinimalFCParameters()

def map_extract_features(df):
    # n_jobs=1 keeps tsfresh serial inside each worker;
    # joblib provides the parallelism across ids
    return extract_features(
        timeseries_container=df,
        default_fc_parameters=settings,
        column_id="ID",
        column_sort="DATE",
        n_jobs=1,
        disable_progressbar=True
    ).reset_index().rename({"index": "ID_CONTO"}, axis=1)

# my_dataframe is your long-format data; one joblib task per distinct ID
out = Parallel(n_jobs=cpu_count() - 1)(
    delayed(map_extract_features)(my_dataframe[my_dataframe["ID"] == id])
    for id in tqdm(my_dataframe["ID"].unique())
)
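
Since out is a list of per-id feature DataFrames, you still need to stitch them together afterwards, for example with pandas:

import pandas as pd
features = pd.concat(out, ignore_index=True)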

This method takes way less memory than specifying column_id directly in the extract_features function.

Zellers answered 6/10, 2022 at 8:43 Comment(0)
