TfidfVectorizer in scikit-learn: ValueError: np.nan is an invalid document

I'm using TfidfVectorizer from scikit-learn to do some feature extraction on text data. I have a CSV file with a Score column (either +1 or -1) and a Review column (text). I pulled this data into a DataFrame so I can run the vectorizer.

This is my code:

import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

df = pd.read_csv("train_new.csv",
                 names=['Score', 'Review'], sep=',')

# x = df['Review'] == np.nan
#
# print x.to_csv(path='FindNaN.csv', sep=',', na_rep = 'string', index=True)
#
# print df.isnull().values.any()
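# (Note: the `x = df['Review'] == np.nan` check above can never match anything,
#  because NaN != NaN by definition; df['Review'].isnull() is the reliable test.)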

v = TfidfVectorizer(decode_error='replace', encoding='utf-8')
x = v.fit_transform(df['Review'])

This is the traceback for the error I get:

Traceback (most recent call last):
  File "/home/PycharmProjects/Review/src/feature_extraction.py", line 16, in <module>
    x = v.fit_transform(df['Review'])
  File "/home/b/hw1/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 1305, in fit_transform
    X = super(TfidfVectorizer, self).fit_transform(raw_documents)
  File "/home/b/work/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 817, in fit_transform
    self.fixed_vocabulary_)
  File "/home/b/work/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 752, in _count_vocab
    for feature in analyze(doc):
  File "/home/b/work/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 238, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)
  File "/home/b/work/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 118, in decode
    raise ValueError("np.nan is an invalid document, expected byte or "
ValueError: np.nan is an invalid document, expected byte or unicode string.

I checked the CSV file and the DataFrame for anything being read as NaN, but I can't find anything. There are 18,000 rows, and none of them returns True for the isnull check.

This is what df['Review'].head() looks like:

  0    This book is such a life saver.  It has been s...
  1    I bought this a few times for my older son and...
  2    This is great for basics, but I wish the space...
  3    This book is perfect!  I'm a first time new mo...
  4    During your postpartum stay at the hospital th...
  Name: Review, dtype: object
Critchfield answered 3/9, 2016 at 6:26 Comment(4)
Could you display the head of df['Review']? It looks to me to be related to the encoding of the text inside your DataFrame more than anything else. – Stepheniestephens
Sure. I just edited my post. – Critchfield
And also type(df['Review'].iloc[0])? – Stepheniestephens
type(df['Review'].iloc[0]) gives me <type 'str'>. – Critchfield

You need to convert the object dtype to a unicode string, as is clearly mentioned in the traceback.

x = v.fit_transform(df['Review'].values.astype('U'))  # even astype(str) would work

From the documentation for TfidfVectorizer:

fit_transform(raw_documents, y=None)

Parameters: raw_documents : iterable
an iterable which yields either str, unicode or file objects

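For illustration, here is a minimal sketch (a toy DataFrame of my own, not the asker's data) that reproduces the error and the fix. Note that astype('U') turns a real NaN into the literal string 'nan':

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in for train_new.csv; the None comes through as NaN.
df = pd.DataFrame({'Score': [1, -1, 1],
                   'Review': ['great book', None, 'not for me']})

v = TfidfVectorizer(decode_error='replace', encoding='utf-8')
# v.fit_transform(df['Review'])  # raises: np.nan is an invalid document
x = v.fit_transform(df['Review'].values.astype('U'))  # NaN becomes the string 'nan'
print(x.shape)  # (3, 6)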
Stepheniestephens answered 3/9, 2016 at 16:1 Comment(4)
This worked. Thank you so much. Marking it as the correct answer. – Critchfield
My memory usage explodes when I use astype('U'). This seems to be a major issue with large CSVs. Any ideas why this happens and whether there's a workaround? – Caceres
"As is clearly mentioned in the traceback" is an unhelpful slight to the person asking the question. – Inductance
Couldn't agree with David more; this remediation is certainly not obvious from the traceback. – Pachyderm

I found a more memory-efficient way to solve this problem.

x = v.fit_transform(df['Review'].apply(lambda x: np.str_(x)))

Of course you can use df['Review'].values.astype('U') to convert the entire Series. But I found that this consumes much more memory when the Series you want to convert is really big. (I tested this with a Series of 800k rows, and astype('U') consumed about 96 GB of memory.)

Instead, if you use a lambda expression to convert each element of the Series from str to numpy.str_, the result is still accepted by fit_transform, and the conversion is faster and does not blow up memory usage.

I'm not sure why this works, because the documentation for TfidfVectorizer says:

fit_transform(raw_documents, y=None)

Parameters: raw_documents : iterable

an iterable which yields either str, unicode or file objects

But in practice, yielding np.str_ instead of plain str is apparently what made it work here.

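A short illustration of my own (not from the docs) of what the lambda actually produces. np.str_ is a subclass of Python's str, and np.str_(np.nan) yields the string 'nan', which is presumably why fit_transform accepts the result:

import numpy as np
import pandas as pd

s = pd.Series(['good', np.nan, 'bad'])

# Element-wise conversion: each value becomes a numpy.str_, and a real NaN
# becomes the string 'nan'; no giant fixed-width array is materialized.
converted = s.apply(lambda x: np.str_(x))
print(type(converted.iloc[0]))     # <class 'numpy.str_'>
print(converted.iloc[1])           # nan (as a string)
print(issubclass(np.str_, str))    # True

# By contrast, .values.astype('U') allocates one contiguous array whose
# per-row width equals the longest string in the whole column.
print(s.values.astype('U').dtype)  # <U4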
Puzzler answered 18/4, 2019 at 9:0 Comment(1)
Thank you so much for the solution. Would you kindly explain the iterable bit between np.str_ and str a little more? I'm green and confused; I thought str is iterable? – Apriorism

I was getting a MemoryError even after using .values.astype('U') on the reviews in my dataset.

So I tried .astype('U').values instead, and it worked.

This is an answer from: Python: how to avoid MemoryError when transform text data into Unicode using astype('U')
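For reference, with the df and v from the question, the reordered call is just the sketch below. My guess at why it can help: Series.astype('U') may go through Python str objects on the pandas side instead of allocating one fixed-width numpy array up front, though this likely depends on the pandas version.

# Same two steps in the opposite order: convert on the pandas side first,
# then hand the resulting array to the vectorizer.
x = v.fit_transform(df['Review'].astype('U').values)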

Berg answered 31/5, 2020 at 22:0 Comment(0)
