clustering very large dataset in R
Asked Answered
F

3

12

I have a dataset of 70,000 numeric values representing distances ranging from 0 to 50, and I want to cluster these numbers. However, with the classical clustering approach I would have to build a 70,000 x 70,000 distance matrix holding the distance between each pair of values, which won't fit in memory. Is there a smart way to solve this problem without resorting to stratified sampling? I also tried the bigmemory and big analytics libraries in R, but I still can't fit the data into memory.
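A quick back-of-the-envelope check of the memory requirement (a dense double-precision matrix, 8 bytes per entry):

```r
# Memory needed for a dense 70,000 x 70,000 matrix of doubles
n <- 70000
gib <- n^2 * 8 / 2^30
gib   # roughly 36.5 GiB -- far more than typical RAM
```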

Fontanel answered 24/2, 2014 at 10:24 Comment(2)
Is this solution (using cluster::clara) relevant/useful? – Frankforter
No, not really: the problem is that the distance matrix will be too large to fit into any memory. – Fontanel
E
5

You can use kmeans, which normally handles this amount of data, to compute a fairly large number of centers (1000, 2000, ...), and then run a hierarchical clustering approach on the coordinates of those centers. That way the distance matrix is much smaller.

## Example
# Data
x <- rbind(matrix(rnorm(70000, sd = 0.3), ncol = 2),
           matrix(rnorm(70000, mean = 1, sd = 0.3), ncol = 2))
colnames(x) <- c("x", "y")

# Hierarchical clustering (CAH) without kmeans: does not necessarily work
library(FactoMineR)
cah.test <- HCPC(x, graph=FALSE, nb.clust=-1)

# CAH with kmeans: works quickly
cl <- kmeans(x, 1000, iter.max=20)
cah <- HCPC(cl$centers, graph=FALSE, nb.clust=-1)
plot.HCPC(cah, choice="tree")
Elan answered 24/2, 2014 at 13:25 Comment(1)
Using your method, after running cah <- HCPC(cl$centers, graph=FALSE, nb.clust=-1) I get this error: Error in catdes(data.clust, ncol(data.clust), proba = proba, row.w = res.sauv$call$row.w.init) : object 'data.clust' not found – Fetter
G
19

70,000 is not large. It's not small either, but it's not particularly large. The problem is the limited scalability of matrix-oriented approaches.

But there are plenty of clustering algorithms that do not use a distance matrix and do not need O(n^2) (or, even worse, O(n^3)) runtime.

You may want to try ELKI, which has great index support (try the R*-tree with SortTileRecursive bulk loading). The index support makes it a lot faster.

If you insist on using R, at least give kmeans and the fastcluster package a try. K-means has runtime complexity O(n*k*i) (where k is the parameter k and i is the number of iterations); fastcluster has an O(n) memory, O(n^2) runtime implementation of single-linkage clustering comparable to the SLINK algorithm in ELKI. (R's "agnes" hierarchical clustering uses O(n^3) runtime and O(n^2) memory.)
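As a sketch of the fastcluster route: hclust.vector is the memory-saving entry point that works directly on the data, so no n x n distance matrix is ever built. The simulated data and the cluster count below are illustrative, not from the question.

```r
library(fastcluster)

set.seed(42)
# 70,000 1-d "distances" in [0, 50], as a one-column matrix of observations
d <- matrix(runif(70000, min = 0, max = 50), ncol = 1)

# Single-linkage via the SLINK-style algorithm: O(n) memory, O(n^2) runtime
hc <- fastcluster::hclust.vector(d, method = "single")

clusters <- cutree(hc, k = 5)   # cut the dendrogram into, e.g., 5 clusters
table(clusters)
```

hclust.vector returns an ordinary hclust object, so the usual cutree/plot tooling applies.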

Implementation matters. Often, implementations in R aren't the best, in my opinion, except for core R, which usually at least has competitive numerical precision. R was built by statisticians, not by data miners; its focus is statistical expressiveness, not scalability. So the authors aren't to blame; it's just the wrong tool for large data.

Oh, and if your data is 1-dimensional, don't use clustering at all. Use kernel density estimation. 1-dimensional data is special: it's ordered. Any good algorithm for breaking 1-dimensional data into intervals should exploit the fact that you can sort the data.
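To make the KDE suggestion concrete, here is a minimal sketch using base R's density(): place the cut points at the local minima (valleys) of the estimated density, then assign each value to the interval it falls into. The simulated bimodal data and the default bandwidth are illustrative, not from the question.

```r
## Breaking 1-d data into intervals via kernel density estimation:
## no distance matrix is needed at any point.
set.seed(1)
d <- c(rnorm(3e4, mean = 10, sd = 2),   # stand-in for the 70,000
       rnorm(4e4, mean = 35, sd = 4))   # distances in [0, 50]

dens <- density(d, n = 2048)            # kernel density estimate

# Local minima of the density: sign of the slope flips from -1 to +1
valley <- which(diff(sign(diff(dens$y))) == 2) + 1
cuts <- dens$x[valley]
cuts <- cuts[cuts > min(d) & cuts < max(d)]  # keep cuts inside the data range

groups <- cut(d, breaks = c(min(d), cuts, max(d)), include.lowest = TRUE)
table(groups)
```

With well-separated modes this recovers the obvious grouping; for noisier data the bandwidth (the bw argument of density) controls how many valleys, and hence groups, you get.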

Gower answered 24/2, 2014 at 14:18 Comment(0)
F
0

Another non-matrix-oriented approach, at least for visualising clusters in big data, is the largeVis algorithm by Tang et al. (2016). The largeVis R package has unfortunately been orphaned on CRAN due to lack of maintenance, but a (maintained?) version can still be compiled from its GitHub repository (after installing Rtools), e.g.,

library(devtools)     
install_github(repo = "elbamos/largeVis")

A Python version of the package exists as well. The underlying algorithm uses segmentation trees and neighbourhood refinement to find the K most similar instances for each observation, and then projects the resulting neighbourhood network into dim lower dimensions. It has been implemented in C++ and uses OpenMP (if supported during compilation) for multi-processing; it has thus been fast enough for clustering any of the larger data sets I have tested so far.
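As a rough usage sketch (hedged: the argument names K and dim, the observations-as-columns input convention, and the coords field are recalled from the package's GitHub README and should be verified against ?largeVis after installing):

```r
library(largeVis)   # installed from GitHub as shown above

set.seed(7)
x <- matrix(rnorm(2000), nrow = 10)   # 10 features x 200 observations (columns)

# Build the K-nearest-neighbour graph and embed it in 2 dimensions
# (argument names assumed from the README; check ?largeVis)
vis <- largeVis(x, dim = 2, K = 20)

coords <- t(vis$coords)               # assumed field: 200 x 2 embedding
```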

Faison answered 2/10, 2018 at 20:42 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.