Here's an appeal for a better way to do something that I can already do inefficiently: filter a series of n-gram tokens using "stop words" so that the occurrence of any stop word term in an n-gram triggers removal.
I'd very much like to have one solution that works for both unigrams and n-grams, although it would be OK to have two versions, one with a "fixed" flag and one with a "regex" flag. I'm putting the two aspects of the question together because someone may have a solution that takes a different approach and addresses both fixed and regular-expression stopword patterns.
Formats:
tokens are a list of character vectors, which may be unigrams or n-grams joined by the "_" (underscore) character.
stopwords are a character vector. Right now I am content to let this be a fixed string, but it would be a nice bonus to be able to implement this using regular-expression-formatted stopwords too.
Desired Output: A list of character vectors matching the input tokens, but with any token removed if any of its component tokens matches a stop word. (For a unigram this means the token itself matches; for an n-gram, it means a match to any of the terms the n-gram comprises.)
Examples, test data, and working code and benchmarks to build on:
tokens1 <- list(text1 = c("this", "is", "a", "test", "text", "with", "a", "few", "words"),
                text2 = c("some", "more", "words", "in", "this", "test", "text"))
tokens2 <- list(text1 = c("this_is", "is_a", "a_test", "test_text", "text_with", "with_a", "a_few", "few_words"),
                text2 = c("some_more", "more_words", "words_in", "in_this", "this_text", "text_text"))
tokens3 <- list(text1 = c("this_is_a", "is_a_test", "a_test_text", "test_text_with", "text_with_a", "with_a_few", "a_few_words"),
                text2 = c("some_more_words", "more_words_in", "words_in_this", "in_this_text", "this_text_text"))
stopwords <- c("is", "a", "in", "this")
# remove any single token that matches a stopword
removeTokensOP1 <- function(w, stopwords) {
  # logical indexing avoids the edge case of x[-which(...)], which returns
  # character(0) when no element matches
  lapply(w, function(x) x[!(x %in% stopwords)])
}
# remove any word pair where a single word contains a stopword
removeTokensOP2 <- function(w, stopwords) {
  # require each stopword to be a whole underscore-delimited component: (^|_)word(_|$)
  matchPattern <- paste0("(^|_)", paste(stopwords, collapse = "(_|$)|(^|_)"), "(_|$)")
  # logical indexing avoids the empty-result edge case of x[-grep(...)]
  lapply(w, function(x) x[!grepl(matchPattern, x)])
}
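# For reference, with stopwords = c("is", "a", "in", "this") the matchPattern
# built above expands to:
# "(^|_)is(_|$)|(^|_)a(_|$)|(^|_)in(_|$)|(^|_)this(_|$)"
# so each stopword only matches as a whole underscore-delimited component.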
removeTokensOP1(tokens1, stopwords)
## $text1
## [1] "test" "text" "with" "few" "words"
##
## $text2
## [1] "some" "more" "words" "test" "text"
removeTokensOP2(tokens1, stopwords)
## $text1
## [1] "test" "text" "with" "few" "words"
##
## $text2
## [1] "some" "more" "words" "test" "text"
removeTokensOP2(tokens2, stopwords)
## $text1
## [1] "test_text" "text_with" "few_words"
##
## $text2
## [1] "some_more" "more_words" "text_text"
removeTokensOP2(tokens3, stopwords)
## $text1
## [1] "test_text_with"
##
## $text2
## [1] "some_more_words"
# performance benchmarks for answers to build on
require(microbenchmark)
microbenchmark(OP1_1 = removeTokensOP1(tokens1, stopwords),
               OP2_1 = removeTokensOP2(tokens1, stopwords),
               OP2_2 = removeTokensOP2(tokens2, stopwords),
               OP2_3 = removeTokensOP2(tokens3, stopwords),
               unit = "relative")
## Unit: relative
## expr min lq mean median uq max neval
## OP1_1 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 100
## OP2_1 5.119066 3.812845 3.438076 3.714492 3.547187 2.838351 100
## OP2_2 5.230429 3.903135 3.509935 3.790143 3.631305 2.510629 100
## OP2_3 5.204924 3.884746 3.578178 3.753979 3.553729 8.240244 100
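One possible shape for the single-function interface described above is sketched here (a minimal sketch only; the removeTokensSplit name and the valuetype argument are placeholders, not an existing API): split each token on the underscore and drop it when any component matches a stopword, using %in% for fixed matching and grepl for regex matching. Because a unigram splits to itself, the same call covers tokens1, tokens2, and tokens3.

# Sketch only: one function for unigrams and n-grams, with a fixed/regex flag.
# removeTokensSplit and valuetype are placeholder names, not an existing API.
removeTokensSplit <- function(w, stopwords, valuetype = c("fixed", "regex")) {
  valuetype <- match.arg(valuetype)
  lapply(w, function(x) {
    parts <- strsplit(x, "_", fixed = TRUE)  # a unigram splits to itself
    drop <- if (valuetype == "fixed") {
      vapply(parts, function(p) any(p %in% stopwords), logical(1))
    } else {
      # regex stopwords are matched against each component; anchor them
      # (e.g. "^a$") if they should match whole components only
      vapply(parts, function(p) {
        any(vapply(stopwords, function(s) any(grepl(s, p)), logical(1)))
      }, logical(1))
    }
    x[!drop]
  })
}
removeTokensSplit(tokens1, stopwords)  # unigrams, fixed matching
removeTokensSplit(tokens3, stopwords)  # trigrams, same call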
Comments:
grepl for long vectors written in C: yes, I was hoping someone would write that, too :} @Rcore – Stratagem
grepl is already vectorized and written in C. stringi::stri_detect_regex and stringi::stri_detect_fixed are both faster and worth checking out. – Vasili
(x <- rep('a', 1e7); system.time(grepl('a', x, fixed = TRUE)); system.time(stri_detect_fixed('a', x))), but the real heavy lifting is going through all the combinations, which seems exponentially slower when you have a lot of patterns to match (adding targets is almost trivial). – Stratagem
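One way to follow the stringi suggestion from the comments is to swap grep() for stringi::stri_detect_regex() inside removeTokensOP2(), leaving the pattern unchanged. This is a sketch only (removeTokensOP2_stringi is a placeholder name); whether it actually helps on this workload would need to be checked against the benchmark above.

# Sketch: same pattern as removeTokensOP2, matching via stringi instead of grep
library(stringi)
removeTokensOP2_stringi <- function(w, stopwords) {
  matchPattern <- paste0("(^|_)", paste(stopwords, collapse = "(_|$)|(^|_)"), "(_|$)")
  lapply(w, function(x) x[!stri_detect_regex(x, matchPattern)])
}
# should reproduce removeTokensOP2's output on the test data above
removeTokensOP2_stringi(tokens2, stopwords)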