How can I keep multi-word names in tokenization together?

I want to classify documents using TF-IDF features. One way to do it:

from sklearn.feature_extraction.text import TfidfVectorizer
import string
import re
import nltk


def tokenize(document):
    document = document.lower()
    for punct_char in string.punctuation:
        document = document.replace(punct_char, " ")
    document = re.sub(r'\s+', ' ', document).strip()  # collapse runs of whitespace

    tokens = document.split(" ")

    # Contains more than I want:
    # from spacy.lang.de.stop_words import STOP_WORDS
    stopwords = nltk.corpus.stopwords.words('german')
    tokens = [token for token in tokens if token not in stopwords]
    return tokens

# How I intend to use it
transformer = TfidfVectorizer(tokenizer=tokenize)

example = "Jochen Schweizer ist eines der interessantesten Unternehmen der Welt, hat den Sitz allerdings nicht in der Schweizerischen Eidgenossenschaft."
transformer.fit([example])

# Example of the tokenizer
print(tokenize(example))

One flaw of this tokenizer is that it splits multi-word names that belong together, such as "Jochen Schweizer" and "Schweizerische Eidgenossenschaft". Lemmatization (reducing inflected words to their base form) is also missing. I would like to get the following tokens:

["Jochen Schweizer", "interessantesten", "unternehmen", "Welt", "Sitz", "allerdings", "nicht", "Schweizerische Eidgenossenschaft"]

I know that spaCy can identify such named entities (NER):

import en_core_web_sm  # python -m spacy download en_core_web_sm --user
parser = en_core_web_sm.load()
doc = parser(example)
print(doc.ents)  # (Jochen Schweizer, Welt, Sitz)

Is there a good way to use spacy to tokenize in a way that keeps the named entity words together?

Barby asked 9/10, 2019 at 8:02

Comment from Thenar: Since you are working with German rather than English, it would probably be better to use a German language model (e.g. parser = spacy.load('de_core_news_sm')).

How about this:

with doc.retokenize() as retokenizer:
    for ent in doc.ents:
        retokenizer.merge(doc[ent.start:ent.end])
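
Retokenizing in place turns each entity span into a single token. Here is a minimal sketch of the effect on its own, assuming the German model from the comment above is installed (python -m spacy download de_core_news_sm):

import spacy

nlp = spacy.load('de_core_news_sm')
doc = nlp("Jochen Schweizer hat den Sitz nicht in der Schweizerischen Eidgenossenschaft.")
with doc.retokenize() as retokenizer:
    for ent in doc.ents:
        retokenizer.merge(doc[ent.start:ent.end])
# Each merged entity now appears as a single token in doc:
print([token.text for token in doc])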

In fact, you can use spaCy to remove punctuation and stop words, and perform lemmatization too!

import spacy

parser = spacy.load('de_core_news_sm')

def tokenize(text):
    doc = parser(text)
    # Merge each named entity into a single token and keep its surface
    # form as the lemma, so multi-word names survive lemmatization.
    with doc.retokenize() as retokenizer:
        for ent in doc.ents:
            retokenizer.merge(doc[ent.start:ent.end], attrs={"LEMMA": ent.text})
    return [x.lemma_ for x in doc if not x.is_punct and not x.is_stop]

Example:

>>> text = "Jochen Schweizer ist eines der interessantesten Unternehmen der Welt, hat den Sitz allerdings nicht in der Schweizerischen Eidgenossenschaft."
>>> print(tokenize(text))
[u'Jochen Schweizer', u'interessant', u'Unternehmen', u'Welt', u'Sitz', u'Schweizerischen Eidgenossenschaft']
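
To tie this back to the TF-IDF pipeline from the question, this spaCy-based tokenizer can be passed to TfidfVectorizer just like the original one. A minimal sketch; passing lowercase=False is my assumption here, to preserve the capitalization of merged entities like "Jochen Schweizer":

from sklearn.feature_extraction.text import TfidfVectorizer

# Reuse the spaCy-based tokenize() defined above; disable sklearn's own
# lowercasing so merged entity tokens keep their capitalization.
transformer = TfidfVectorizer(tokenizer=tokenize, lowercase=False)
tfidf = transformer.fit_transform([text])
print(transformer.vocabulary_)  # merged entities appear as single features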
Thenar answered 9/10, 2019 at 8:27
