Unique values of factor column containing NAs => "Hash table is full" error

I have a data.table with 57m records and 9 columns, one of which is causing a problem when I try to run some summary stats. The offending column is a factor with 3699 levels and I am receiving an error from the following line of code:

    > unique(da$UPC)
    Error in unique.default(da$UPC): hash table is full

Now obviously I would just use levels(da$UPC), but I am trying to count the unique values which exist in each group as part of multiple j parameters/calculations in a data.table grouping statement (sketched below).
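For context, the kind of grouped calculation I am aiming for looks roughly like this, where week is a made-up grouping column and length(unique(...)) stands in for whatever counting approach ends up working:

    # count rows and distinct UPCs per (illustrative) group
    da[, list(n_rows = .N, n_upc = length(unique(UPC))), by = week]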

Interestingly, unique(da$UPC[1:1000000]) works as expected, but unique(da$UPC[1:10000000]) does not. Given that my table has 57m records this is an issue.

I tried converting the factor to a character and that works no problem as follows:

    da$UPC = as.character(levels(da$UPC))[da$UPC]
    unique(da$UPC)

Doing this does show me an additional "level" which is NA. So because my data has some NAs in a factor column, the unique function fails to work. I'm wondering if this is something which the developers are aware of and something which needs to be fixed? I found the following article on r-devel which might be relevant, but I'm not sure and it does not mention data.table.

Linked article: unique(1:3,nmax=1) freezes R!

    sessionInfo:

    R version 3.0.1 (2013-05-16)
    Platform: x86_64-unknown-linux-gnu (64-bit)

    locale:
     [1] LC_CTYPE=C                    LC_NUMERIC=C
     [3] LC_TIME=en_US.iso88591        LC_COLLATE=C
     [5] LC_MONETARY=en_US.iso88591    LC_MESSAGES=en_US.iso88591
     [7] LC_PAPER=C                    LC_NAME=C
     [9] LC_ADDRESS=C                  LC_TELEPHONE=C
     [11] LC_MEASUREMENT=en_US.iso88591 LC_IDENTIFICATION=C

    attached base packages:
    [1] stats     graphics  grDevices utils     datasets  methods   base

    other attached packages:
    [1] plyr_1.8         data.table_1.8.8
Absentminded answered 4/12, 2013 at 20:8 Comment(11)
Please post your sessionInfo() and a reproducible example (in spite of the link).Eliga
Looking at unique.default, the error must be coming from the line factor(z, levels... since it works as character.Troy
OK I have posted the sessionInfo but making a reproducible example will take a little while longer.Absentminded
It's not clear what this has to do with data.table. You're calling unique(a vector) which'll call, as Senor points out, unique.default.Eliga
Correct @Arun, I will remove the data.table tag.Absentminded
@Eliga I'm afraid I have been unable to create a reproducible example, as the full data file cannot be posted here and I do not have the time to do so. Should I leave the question open or remove it?Absentminded
There are 3 things I would try:Rosenblast
1) using unique with incomparables = TRUERosenblast
2) using unique with nmax = 4000 (nmax is the max expected unique elements and it might solve the hash table overload)Rosenblast
3) simply remove NAs after converting to character: unique(da$UPC[!is.na(da$UPC)])Rosenblast
You also have the option to try using uniqueN from the devel version of data.table, or perhaps unique(na.omit(...)).Revet
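For reference, the suggestions above look roughly like this in code (nmax is a documented argument of unique(); uniqueN() needs a newer data.table than the 1.8.8 shown in sessionInfo, so that line is only an assumption):

    unique(da$UPC, nmax = 4000)              # raise the expected number of distinct values
    unique(na.omit(as.character(da$UPC)))    # drop NAs and compare as character
    # uniqueN(da$UPC)                        # only available in later data.table versions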

This snippet of code should place your missing observations into a regular level, which will be more manageable to work with.

# The '(NA)' level must exist before the missing values can be assigned to it
levels(da$UPC) <- c(levels(da$UPC), '(NA)')
da$UPC[is.na(da$UPC)] <- '(NA)'
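With NA folded into an ordinary level, the original call should, in principle, no longer trip over missing values (untested at this scale):

# '(NA)' is now just another level of the factor
length(unique(da$UPC))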

It sounds like you are ultimately trying to drop infrequent levels to assist in some sort of analysis. I wrote a function factorize() which I believe can help you. It buckets infrequent levels into an "Other" category.

Here's the link; please let me know if it helps.

factorize(): https://github.com/greenpat/R-Convenience/blob/master/factorize.R

(reproduced below)

# This function takes a vector x and returns a factor representation of the same vector.
# The key advantage of factorize is that you can assign levels for infrequent categories,
# as well as empty and NA values. This makes it much easier to perform
# multidimensional/thematic analysis on your largest population subsets.
factorize <- function(
    x,  # vector to be transformed
    min_freq = .01,  # all levels < this % of records will be bucketed
    min_n = 1,  # all levels < this # of records will be bucketed
    NA_level = '(missing)',  # level created for NA values
    blank_level = '(blank)',  # level created for "" values
    infrequent_level = 'Other',  # level created for bucketing rare values
    infrequent_can_include_blank_and_NA = FALSE,  # by default NA and blank are not bucketed
    order = TRUE,  # default to ordered
    reverse_order = FALSE  # default to increasing order
) {
    if (!is.factor(x)) {  # also covers ordered factors, unlike class(x) != 'factor'
        x <- as.factor(x)
    }
    # suspect this is faster than reassigning new factor object
    levels(x) <- c(levels(x), NA_level, infrequent_level, blank_level)

    # Swap out the NA and blank categories
    x[is.na(x)] <- NA_level
    x[x == ''] <- blank_level

    # Going to use this table to reorder
    f_tb <- table(x, useNA = 'always')

    # Which levels will be bucketed?
    infreq_set <- c(
        names(f_tb[f_tb < min_n]),
        names(f_tb[(f_tb/sum(f_tb)) < min_freq])
    )

    # If NA and/or blank were infrequent levels above, this prevents bucketing
    if(!infrequent_can_include_blank_and_NA){
        infreq_set <- infreq_set[!infreq_set %in% c(NA_level, blank_level)]
    }

    # Relabel all the infrequent choices
    x[x %in% infreq_set] <- infrequent_level

    # Return the reordered factor
    reorder(droplevels(x), rep(1-(2*reverse_order),length(x)), FUN = sum, order = order)
}
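A possible call against the column from the question might look like this (the threshold is an arbitrary example, not a recommendation):

da$UPC_f <- factorize(da$UPC, min_freq = 0.001)  # NAs become '(missing)', rare UPCs become 'Other'
length(levels(da$UPC_f))                         # remaining levels, including '(missing)' and 'Other'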
Phytology answered 25/9, 2017 at 12:47 Comment(0)

Could you use dplyr and get a different result? For instance, I set up some (small) fake data and then determine the distinct levels of alpha. I don't know how well this scales, though.

library(dplyr)

test <- data.frame(alpha = sample(c('a', 'b', 'c'), 100000, replace = TRUE),
                   num = runif(100000))

uniqueAlpha <- distinct(select(test, alpha))
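If only the count is needed, dplyr's n_distinct() is a shorter route (same toy data assumed):

n_distinct(test$alpha)   # NA, if present, counts as one extra distinct value by default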
Auria answered 2/8, 2015 at 16:24 Comment(0)
E
0

Maybe I am missing the point, but if it is a data.table object you can use this to summarize the counts:

da[,.N, by=UPC]

If it works, the unique values would be:

upc_values <- da[, .N, by = UPC]$UPC  # avoid masking base::unique
length(upc_values)

You can group by multiple columns too:

da[,.N,by=.(A,B,C,..)]
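Tying this back to the original goal of counting distinct UPCs per group, a sketch with a made-up grouping column store would be:

da[, .(n_upc = length(unique(UPC))), by = store]   # or uniqueN(UPC) on newer data.table versions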
Eulaliaeulaliah answered 11/5, 2017 at 20:49 Comment(0)

Not sure it will solve the problem, but you can check Hadley Wickham's forcats package:

library(forcats)
fct_count(da$UPC)
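If that call does run on the full column, the number of distinct values can be read straight off the result:

nrow(fct_count(da$UPC))   # one row per level, plus a row for NA values if any are present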
Lacrimatory answered 30/11, 2016 at 22:8 Comment(0)
