Twitter Data Analysis - Error in Term Document Matrix
I am trying to do some analysis of Twitter data. I downloaded the tweets and created a corpus from the tweet text as follows:

# Creating a Corpus
wim_corpus = Corpus(VectorSource(wimbledon_text)) 

When trying to create a TermDocumentMatrix as below, I get an error and warnings.

tdm = TermDocumentMatrix(wim_corpus, 
                       control = list(removePunctuation = TRUE, 
                                      stopwords =  TRUE, 
                                      removeNumbers = TRUE, tolower = TRUE)) 

Error in simple_triplet_matrix(i = i, j = j, v = as.numeric(v), nrow = length(allTerms),    : 'i, j, v' different lengths


In addition: Warning messages:
1: In parallel::mclapply(x, termFreq, control) :
 all scheduled cores encountered errors in user code
2: In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'
3: In TermDocumentMatrix.VCorpus(corpus) : invalid document identifiers
4: In simple_triplet_matrix(i = i, j = j, v = as.numeric(v), nrow = length(allTerms),  :
NAs introduced by coercion

Can anyone point out what this error indicates? Could this be related to the tm package?

The tm library has been loaded. I am using R 3.0.1 and RStudio 0.97.

Jorge answered 29/8, 2013 at 7:18 Comment(1)
Can you reproduce this error with a small text file (some file you could share)? - Brecher
I had the same problem, and it turned out to be a package-compatibility issue. Try installing

install.packages("SnowballC")

and load with

library(SnowballC)

before calling DocumentTermMatrix (or TermDocumentMatrix, as in the question).

It solved my problem.
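For context, here is a minimal sketch of this fix applied to the question's code. The sample tweets below are invented for illustration; in the question, `wimbledon_text` holds the real downloaded tweet text.

```r
# install.packages("SnowballC")   # one-time install
library(SnowballC)                # load before building the matrix
library(tm)

# Hypothetical stand-in for the question's wimbledon_text vector
wimbledon_text <- c("Murray wins Wimbledon 2013!", "Great final at #Wimbledon")

wim_corpus <- Corpus(VectorSource(wimbledon_text))
tdm <- TermDocumentMatrix(wim_corpus,
                          control = list(removePunctuation = TRUE,
                                         stopwords = TRUE,
                                         removeNumbers = TRUE,
                                         tolower = TRUE))
```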

Expellant answered 15/10, 2013 at 14:7 Comment(2)
Can you elaborate on why this is a solution? - Moralez
I'm not sure about the details of the compatibility problem. It might have something to do with the recent update of slam. Did this not work? - Expellant
I think the error is due to some "exotic" characters within the tweet messages, which the tm functions cannot handle. I got the same error using tweets as a corpus source. Maybe the following workaround helps:

# Reading some tweet messages (here from a text file) into a vector

rawTweets <- readLines(con = "target_7_sample.txt", ok = TRUE, warn = FALSE, encoding = "UTF-8") 

# Convert the tweet text explicitly into utf-8

convTweets <- iconv(rawTweets, to = "UTF-8")

# The above conversion leaves NA entries for tweets that could not be
# converted. Remove them with the following command:

tweets <- convTweets[!is.na(convTweets)]

If deleting some tweets is not an issue for your use case (e.g. building a word cloud), this approach may work, and you can proceed by calling the Corpus function of the tm package.
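Continuing the workaround above, the cleaned vector can then be handed to tm's Corpus function as usual. The sample tweets below are invented for illustration, standing in for the result of the iconv/NA-removal step:

```r
library(tm)

# Hypothetical cleaned tweets, i.e. the output of the previous step
tweets <- c("Murray takes the title", "What a match at Wimbledon")

wim_corpus <- Corpus(VectorSource(tweets))
length(wim_corpus)   # one document per tweet
```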

Regards--Albert

Pirozzo answered 4/10, 2013 at 14:26 Comment(0)
I found a way to solve this problem in an article about the tm package.

An example that reproduces the error follows:

getwd()
require(tm)

# Importing files
files <- DirSource(directory = "texts/", encoding = "latin1")

# loading files and creating a Corpus
corpus <- VCorpus(x=files)

# Summary

summary(corpus)
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, stripWhitespace)
matrix_terms <- DocumentTermMatrix(corpus)
Warning messages:
In TermDocumentMatrix.VCorpus(x, control) : invalid document identifiers

This error occurs because TermDocumentMatrix needs an object built from a VectorSource, but the previous transformations convert the documents in your corpus to plain character vectors, a class the function does not accept.

However, if you add one more command before calling TermDocumentMatrix, you can keep going.

Below follows the code with the new command:

getwd()
require(tm)  

files <- DirSource(directory = "texts/", encoding = "latin1")

# loading files and creating a Corpus
corpus <- VCorpus(x=files)

# Summary 
summary(corpus)
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, stripWhitespace)

# COMMAND TO CHANGE THE CLASS AND AVOID THIS ERROR
corpus <- Corpus(VectorSource(corpus))
matrix_terms <- DocumentTermMatrix(corpus)

After this change, the error no longer occurs.

Fondly answered 29/8, 2013 at 7:18 Comment(0)
As Albert suggested, converting the text encoding to UTF-8 solved the problem for me. But instead of removing whole tweets that contain problematic characters, you can use the sub argument of iconv to strip only the "bad" characters and keep the rest:

tweets <- iconv(rawTweets, to = "UTF-8", sub = "")

This no longer produces NAs, so no further filtering step is necessary.
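A minimal illustration of the difference. The sample string is invented; the \xff byte in it is invalid in UTF-8:

```r
x <- "caf\xff tweet"                       # contains an invalid UTF-8 byte
iconv(x, from = "UTF-8", to = "UTF-8")     # without sub: the whole element becomes NA
cleaned <- iconv(x, from = "UTF-8", to = "UTF-8", sub = "")
cleaned                                    # with sub = "": only the bad byte is dropped
```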

Enzyme answered 27/11, 2013 at 19:3 Comment(0)
I think this problem happens because some unusual characters appear in the text. Here is my solution:

# str_replace_all comes from the stringr package
library(stringr)
wim_corpus = tm_map(wim_corpus, str_replace_all, "[^[:alnum:]]", " ")
# (with newer versions of tm, wrap user functions in content_transformer())


tdm = TermDocumentMatrix(wim_corpus, 
                       control = list(removePunctuation = TRUE, 
                                      stopwords =  TRUE, 
                                      removeNumbers = TRUE, tolower = TRUE))
Encouragement answered 8/5, 2014 at 5:0 Comment(0)
There were some German umlaut letters and special fonts that were causing the errors. I could not remove them in R, even by converting to UTF-8 (I am a new R user), so I used Excel to remove the German letters, and after that there were no errors.

Markus answered 15/7, 2014 at 4:55 Comment(0)
