Pass tokens to CountVectorizer

I have a text classification problem where I have two types of features:

  • features which are n-grams (extracted by CountVectorizer)
  • other textual features (e.g. the presence of a word from a given lexicon). These features are different from the n-grams, since they should be part of any n-gram extracted from the text.

Both types of features are extracted from the text's tokens. I want to run tokenization only once, and then pass these tokens both to CountVectorizer and to the other presence-feature extractor. So I want to pass a list of tokens to CountVectorizer, but it only accepts a string as the representation of a sample. Is there a way to pass an array of tokens?
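
For concreteness, here is a minimal sketch of the setup (the .split() call and the commented-out lexicon_features helper are placeholders, not my real code):

from sklearn.feature_extraction.text import CountVectorizer

texts = ["hello world", "hello again world"]
tokens = [t.split() for t in texts]  # tokenize once (placeholder tokenizer)

# the same tokens would also feed the presence-feature extractor, e.g.
# lexicon_feats = [lexicon_features(tok) for tok in tokens]  # hypothetical helper

cv = CountVectorizer()
cv.fit(tokens)  # fails: the default preprocessor calls .lower() on each
                # document, so a list of tokens raises AttributeError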

Cyna answered 8/3, 2016 at 12:32 Comment(0)

Summarizing the answers of @user126350 and @miroli, and this link:

from sklearn.feature_extraction.text import CountVectorizer

def dummy(doc):
    # identity function: the documents are already tokenized
    return doc

cv = CountVectorizer(
    tokenizer=dummy,
    preprocessor=dummy,
)

docs = [
    ['hello', 'world', '.'],
    ['hello', 'world'],
    ['again', 'hello', 'world']
]

cv.fit(docs)
cv.get_feature_names()  # use cv.get_feature_names_out() in scikit-learn >= 1.0
# ['.', 'again', 'hello', 'world']

The one thing to keep in mind is to wrap the new tokenized document in a list before calling transform(), so that it is handled as a single document rather than each of its tokens being interpreted as a separate document:

new_doc = ['again', 'hello', 'world', '.']
v_1 = cv.transform(new_doc)
v_2 = cv.transform([new_doc])

v_1.shape
# (4, 4)

v_2.shape
# (1, 4)
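
One more note, assuming a reasonably recent scikit-learn: when a custom tokenizer is passed, the unused default token_pattern triggers a UserWarning, and passing token_pattern=None is the documented way to silence it:

cv = CountVectorizer(
    tokenizer=dummy,
    preprocessor=dummy,
    token_pattern=None,  # avoid the "token_pattern will not be used" warning
)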
Lepp answered 17/10, 2018 at 12:45 Comment(0)

In general, you can pass a custom tokenizer parameter to CountVectorizer. The tokenizer should be a function that takes a string and returns an array of its tokens. However, if you already have your tokens in arrays, you can simply make a dictionary mapping arbitrary keys to the token arrays, and have your tokenizer look each key up in that dictionary. Then, when you run CountVectorizer, just pass your dictionary keys as the documents. For example:

# arbitrary token arrays and their keys
custom_tokens = {"hello world": ["here", "is", "world"],
                 "it is possible": ["yes it", "is"]}

CV = CountVectorizer(
    # so we can pass it strings
    input='content',
    # turn off preprocessing of strings to avoid corrupting our keys
    lowercase=False,
    preprocessor=lambda x: x,
    # use our token dictionary
    tokenizer=lambda key: custom_tokens[key])

CV.fit(custom_tokens.keys())
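
As a quick sanity check (continuing the snippet above), the learned vocabulary keeps a multi-word string like "yes it" as a single token, which is the point of bypassing the built-in tokenization:

CV.get_feature_names_out()  # get_feature_names() on scikit-learn < 1.0
# ['here', 'is', 'world', 'yes it']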
Peggi answered 17/8, 2016 at 1:10 Comment(0)

Similar to user126350's answer, but even simpler, here's what I did.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer

def do_nothing(tokens):
    return tokens

pipe = Pipeline([
    ('tokenizer', MyCustomTokenizer()),  # your own transformer returning token lists
    ('vect', CountVectorizer(tokenizer=do_nothing,
                             preprocessor=None,
                             lowercase=False))
])

doc_vects = pipe.fit_transform(my_docs)  # pass a list of documents as strings
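
A side note on the design: do_nothing is a named, module-level function rather than a lambda, which matters if you ever want to pickle the fitted pipeline, since the standard pickle module cannot serialize lambdas (this assumes MyCustomTokenizer is itself picklable):

import pickle

# works because do_nothing is a named, importable function;
# a lambda tokenizer here would raise PicklingError
blob = pickle.dumps(pipe)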
Windbroken answered 4/5, 2018 at 13:9 Comment(1)
Great example for preprocessed and tokenized text! – Alegar
