Possibility to apply online algorithms on big data files with sklearn?

I would like to apply fast online dimensionality reduction techniques such as (online/mini-batch) Dictionary Learning on big text corpora. My input data naturally do not fit in memory (which is why I want to use an online algorithm), so I am looking for an implementation that can iterate over a file rather than loading everything into memory. Is it possible to do this with sklearn? Are there alternatives?

Thanks

Hernardo answered 17/9, 2012 at 13:18 Comment(0)

Since sklearn 0.13 there is indeed an implementation of the HashingVectorizer.

EDIT: Here is a full-fledged example of such an application

Basically, this example demonstrates that you can learn (e.g. classify text) on data that cannot fit in the computer's main memory (but rather on disk / network / ...).
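For a rough idea of what this looks like, here is a minimal sketch (not the linked example itself; the file path, the one-document-per-line "label<TAB>text" format and the label set are placeholder assumptions): stream the corpus in batches, vectorize each batch with the stateless HashingVectorizer, and update a linear model with partial_fit.

from itertools import islice

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# The hashing vectorizer is stateless: transform() needs no fitted vocabulary,
# so the corpus never has to be loaded into memory at once.
vectorizer = HashingVectorizer(n_features=2**18)
classifier = SGDClassifier()
all_classes = ['pos', 'neg']  # placeholder: the classes must be known up front

def iter_batches(path, batch_size=1000):
    # Placeholder format: one document per line, "label<TAB>text"
    with open(path) as f:
        while True:
            lines = list(islice(f, batch_size))
            if not lines:
                break
            labels, docs = zip(*(line.rstrip('\n').split('\t', 1) for line in lines))
            yield list(docs), list(labels)

for docs, labels in iter_batches('/path/to/big_corpus.tsv'):
    X = vectorizer.transform(docs)  # transform only, never fit
    classifier.partial_fit(X, labels, classes=all_classes)

Because there is no vocabulary to build, a single sequential pass over the file is enough, which is the whole point of the out-of-core approach.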

Lantha answered 22/5, 2013 at 8:41 Comment(2)
I should add, from my personal experience, that even using HashingVectorizer (with n_features=2**18) it takes 20 minutes to process just a 55 MB file on an 8-core, 30 GB machine. (Furring)
The number of cores is irrelevant here (unless you parallelize?), as is the RAM (the point is out-of-core learning: the data does not live in RAM). I'm puzzled though, as with the linked code I'm vectorizing and training a model on 2.5 GB of text in 12 minutes on a high-end server. Do you use the linked code or do you have some particular preprocessing? (Lantha)

For some algorithms supporting partial_fit, it is possible to write an outer loop in a script that does out-of-core, large-scale text classification. However, there are some missing elements: a dataset reader that iterates over the data on disk (e.g. folders of flat files, a SQL database server, a NoSQL store, or a Solr index with stored fields). We also lack an online text vectorizer.

Here is a sample integration template to explain how it would fit together.

import numpy as np
import joblib  # needed for the model checkpointing below
from sklearn.linear_model import Perceptron

from mymodule import SomeTextDocumentVectorizer
from mymodule import DataSetReader

dataset_reader = DataSetReader('/path/to/raw/data')

expected_classes = dataset_reader.get_all_classes()  # need to know the possible classes ahead of time

feature_extractor = SomeTextDocumentVectorizer()
classifier = Perceptron()

for i, (documents, labels) in enumerate(dataset_reader.iter_chunks()):

    vectors = feature_extractor.transform(documents)
    classifier.partial_fit(vectors, labels, classes=expected_classes)

    if i % 100 == 0:
        # dump model to be able to monitor quality and later analyse convergence externally
        joblib.dump(classifier, 'model_%04d.pkl' % i)

The dataset reader class is application specific and will probably never make it into scikit-learn (except maybe for a folder of flat text files or CSV files that would not require adding a new dependency to the library).
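For illustration only, here is what a minimal reader for the flat-file case could look like. The class name, the get_all_classes / iter_chunks methods and the one-sub-folder-per-class layout mirror the hypothetical mymodule used above; none of this is scikit-learn API.

import os
from itertools import islice

class DataSetReader(object):
    # Hypothetical reader: one sub-folder per class, one plain text file per document.

    def __init__(self, root, chunk_size=1000):
        self.root = root
        self.chunk_size = chunk_size

    def get_all_classes(self):
        return sorted(os.listdir(self.root))

    def _iter_documents(self):
        for label in self.get_all_classes():
            folder = os.path.join(self.root, label)
            for filename in sorted(os.listdir(folder)):
                with open(os.path.join(folder, filename)) as f:
                    yield f.read(), label

    def iter_chunks(self):
        documents = self._iter_documents()
        while True:
            chunk = list(islice(documents, self.chunk_size))
            if not chunk:
                break
            docs, labels = zip(*chunk)
            yield list(docs), list(labels)

Note that this walks the classes one after another, so in practice you would want to shuffle or interleave the files so that each chunk mixes labels; online learners such as Perceptron behave poorly on label-sorted streams.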

The text vectorizer part is more problematic. The current vectorizer does not have a partial_fit method because of the way we build the in-memory vocabulary (a python dict that is trimmed depending on max_df and min_df). We could maybe build one using an external store and drop the max_df and min_df features.

Alternatively we could build a HashingTextVectorizer that would use the hashing trick to drop the dictionary requirements. None of those exist at the moment (although we already have some building blocks, such as a murmurhash wrapper and a pull request for hashing features).
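To make the hashing trick concrete, here is a toy sketch (not the planned scikit-learn implementation) that maps tokenized documents to a fixed-width sparse count matrix using nothing but a hash function, so no vocabulary dict has to be kept in memory; crc32 stands in for murmurhash here.

import zlib
import scipy.sparse as sp

def hash_vectorize(token_lists, n_features=2**18):
    # Each token is hashed to a column index; duplicate (row, col) entries are
    # summed when converting to CSR, which yields per-document term counts.
    rows, cols, values = [], [], []
    for i, tokens in enumerate(token_lists):
        for token in tokens:
            j = zlib.crc32(token.encode('utf-8')) % n_features
            rows.append(i)
            cols.append(j)
            values.append(1)
    return sp.coo_matrix((values, (rows, cols)),
                         shape=(len(token_lists), n_features)).tocsr()

X = hash_vectorize([['online', 'learning'], ['hashing', 'trick', 'hashing']])

The price is that collisions can merge unrelated tokens into the same column and that the mapping cannot be inverted back to words, which is usually acceptable for linear classifiers.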

In the meantime I would advise you to have a look at Vowpal Wabbit and maybe these Python bindings.

Edit: The sklearn.feature_extraction.FeatureHasher class has been merged into the master branch of scikit-learn and will be available in the next release (0.13). Have a look at the documentation on feature extraction.
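For instance (n_features and the token lists below are just placeholders), the hasher needs no fitting step at all:

from sklearn.feature_extraction import FeatureHasher

# input_type='string' hashes raw tokens; repeated tokens accumulate their counts
hasher = FeatureHasher(n_features=2**18, input_type='string')

token_stream = [['big', 'data', 'text'], ['online', 'learning', 'online']]
X = hasher.transform(token_stream)  # scipy.sparse matrix of shape (2, 2**18)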

Edit 2: 0.13 is now released with both FeatureHasher and HashingVectorizer that can directly deal with text data.

Edit 3: there is now an example on out-of-core learning with the Reuters dataset in the official example gallery of the project.

Sero answered 17/9, 2012 at 14:0 Comment(2)
Thanks for this detailed answer, I finally had a closer look at this hashing trick (implemented in Vowpal Wabbit), I'll test it :-) (Hernardo)
Tested and approved! Unbelievably fast and full of advanced options. (Hernardo)

In addition to Vowpal Wabbit, gensim might be interesting as well - it too features online Latent Dirichlet Allocation.
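As a hedged sketch of how that looks in gensim (the corpus path, tokenization and parameters are placeholder assumptions), the model only needs an iterable that can re-stream the documents, so nothing has to be held in RAM:

from gensim import corpora, models

class StreamedCorpus(object):
    # Hypothetical streaming corpus: one whitespace-tokenized document per line.

    def __init__(self, path, dictionary):
        self.path = path
        self.dictionary = dictionary

    def __iter__(self):
        with open(self.path) as f:
            for line in f:
                yield self.dictionary.doc2bow(line.lower().split())

# One streaming pass to build the id <-> token mapping, another to train online LDA.
dictionary = corpora.Dictionary(line.lower().split() for line in open('/path/to/corpus.txt'))
corpus = StreamedCorpus('/path/to/corpus.txt', dictionary)
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=100,
                      chunksize=2000, update_every=1)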

Furl answered 18/9, 2012 at 11:52 Comment(1)
Actually I missed the unsupervised emphasis of the original question. Indeed, out-of-core LDA and PCA from gensim might be very interesting for this case. (Sero)
