DBSCAN on Spark: which implementation?
I would like to do some DBSCAN on Spark. I have currently found two implementations:

  • https://github.com/irvingc/dbscan-on-spark

  • https://github.com/alitouka/spark_dbscan

I have tested the first one with the sbt configuration given in its GitHub README, but:

  • The functions in the jar do not match those in the documentation or in the source on GitHub. For example, I cannot find the train function in the jar.

  • I managed to run a test with the fit function (found in the jar), but a slightly too large value of epsilon put the code into an infinite loop.

Code:

val model = DBSCAN.fit(eps, minPoints, values, parallelism)

Has someone managed to do something with the first library?
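The infinite loop with a large epsilon is consistent with how DBSCAN itself behaves: once eps is big enough, every point's ε-neighbourhood is (nearly) the whole dataset, so the seed expansion touches every point from every point and the run can appear to hang. A minimal single-machine sketch (hypothetical code, independent of both libraries) makes this visible:

```scala
// Minimal DBSCAN sketch to illustrate eps sensitivity. Labels: 0 = unvisited
// (never survives to the output), -1 = noise, 1..k = cluster ids.
object DbscanSketch {
  val Noise = -1
  val Unvisited = 0

  def dbscan(points: Array[Array[Double]], eps: Double, minPts: Int): Array[Int] = {
    val labels = Array.fill(points.length)(Unvisited)
    var cluster = 0
    // Brute-force range query: every query scans all n points.
    def neighbours(i: Int): Seq[Int] =
      points.indices.filter { j =>
        val d2 = points(i).zip(points(j)).map { case (a, b) => (a - b) * (a - b) }.sum
        d2 <= eps * eps
      }
    for (i <- points.indices if labels(i) == Unvisited) {
      val seeds = neighbours(i)
      if (seeds.size < minPts) labels(i) = Noise
      else {
        cluster += 1
        labels(i) = cluster
        val queue = scala.collection.mutable.Queue(seeds: _*)
        while (queue.nonEmpty) {
          val j = queue.dequeue()
          if (labels(j) == Noise) labels(j) = cluster // noise becomes a border point
          if (labels(j) == Unvisited) {
            labels(j) = cluster
            val n = neighbours(j)
            if (n.size >= minPts) queue ++= n // core point: keep expanding
          }
        }
      }
    }
    labels
  }

  def main(args: Array[String]): Unit = {
    val data = Array(Array(0.0, 0.0), Array(0.0, 1.0), Array(1.0, 0.0),
                     Array(10.0, 10.0), Array(10.0, 11.0), Array(11.0, 10.0))
    println(dbscan(data, eps = 2.0, minPts = 3).mkString(","))   // 1,1,1,2,2,2
    println(dbscan(data, eps = 100.0, minPts = 3).mkString(",")) // 1,1,1,1,1,1
  }
}
```

With eps = 2.0 the two groups separate; with eps = 100.0 everything merges into one cluster, and each neighbourhood query scans the whole dataset.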

Has someone tested the second one?

Ixia answered 18/3, 2016 at 17:39 Comment(3)
Use ELKI with the cover tree index. It's much faster, despite being single-node only. I tried one of the Spark versions and it ran out of memory, but ELKI still worked fine and was fast.Condiment
Another incomplete implementation of DBSCAN on Spark is github.com/mraad/dbscan-sparkAllveta
Can you put your code for running the first package here? I tested the one-line code, but it does not work for me. I use IntelliJ, but the package cannot be automatically downloaded using sbt, so I manually downloaded the jar file from here: dl.bintray.com/irvingc/maven/com/irvingc/spark/dbscan_2.10/…Cobbie

Please try ELKI. As this is Java, it should be easy to call from Scala.

ELKI is very well optimized, and with indexes it will scale to quite large data sets.

We tried to include one of these Spark implementations in our benchmarking study, but it ran out of memory (it was the only implementation to run out of memory; the k-means implementations of Spark and Mahout were also among the slowest):

Hans-Peter Kriegel, Erich Schubert, and Arthur Zimek.
The (black) art of runtime evaluation: Are we comparing algorithms or implementations?
In: Knowledge and Information Systems (KAIS). 2016, 1–38

Professor Neukirchen benchmarked parallel implementations of DBSCAN in this technical report:

Helmut Neukirchen
Survey and Performance Evaluation of DBSCAN Spatial Clustering Implementations for Big Data and High-Performance Computing Paradigms

Apparently he got some of the Spark implementations working, but noted that:

The result is devastating: none of the implementations for Apache Spark is anywhere near to the HPC implementations. In particular on bigger (but still rather small) data sets, most of them fail completely and do not even deliver correct results.

and earlier:

When running any of the "Spark DBSCAN" implementations while making use of all available cores of our cluster, we experienced out-of-memory exceptions.

(also, "Spark DBSCAN" took 2406 seconds on 928 cores, ELKI took 997 seconds on 1 core for the smaller benchmark - the other Spark implementation didn't fare too well either, in particular it did not return the correct result...)

"DBSCAN on Spark" did not crash, but returned completely wrong clusters.

While "DBSCAN on Spark" finishes faster, it delivered completely wrong clustering results. Due to the hopelessly long run-times of the DBSCAN implementations for Spark already with the maximum number of cores, we did not perform measurements with a lower number of cores.

You can wrap a double[][] array as ELKI database:

// Adapter to load data from an existing array.
DatabaseConnection dbc = new ArrayAdapterDatabaseConnection(data);
// Create a database (which may contain multiple relations!)
Database db = new StaticArrayDatabase(dbc, null);
// Load the data into the database (do NOT forget to initialize...)
db.initialize();

// Squared Euclidean is faster than Euclidean.
Clustering<Model> c = new DBSCAN<NumberVector>(
  SquaredEuclideanDistanceFunction.STATIC, eps*eps, minpts).run(db);

for(Cluster<Model> clu : c.getAllClusters()) {
  // Process clusters
}

See also: Java API example (in particular, how to map DBIDs back to row indexes). For better performance, pass an index factory (such as new CoverTree.Factory(...)) as second parameter to the StaticArrayDatabase constructor.
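As a rough illustration of the index-factory hint above, here is a hedged sketch of wiring in a cover tree, written in Scala since ELKI is plain Java. The names follow the ELKI 0.7.x API, and the constructor arguments (expansion rate, truncation threshold) are illustrative values, not tuned recommendations; check the Javadoc of your ELKI release before relying on any of this:

```scala
import java.util.Collections
import de.lmu.ifi.dbs.elki.database.StaticArrayDatabase
import de.lmu.ifi.dbs.elki.datasource.ArrayAdapterDatabaseConnection
import de.lmu.ifi.dbs.elki.distance.distancefunction.minkowski.EuclideanDistanceFunction
import de.lmu.ifi.dbs.elki.index.tree.metrical.covertree.CoverTree

val data: Array[Array[Double]] = ??? // your points
val dbc = new ArrayAdapterDatabaseConnection(data)
// Cover trees require a true metric, so index on Euclidean distance
// (not squared Euclidean, which violates the triangle inequality).
val coverTree = new CoverTree.Factory(EuclideanDistanceFunction.STATIC, 1.3, 10)
val db = new StaticArrayDatabase(dbc, Collections.singletonList(coverTree))
db.initialize()
```

The DBSCAN call itself stays as in the Java snippet above; the database transparently uses the index for the range queries.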

Volsci answered 4/5, 2017 at 15:13 Comment(4)
For this example, could you give how exactly the new CoverTree.Factory looks like?Ordeal
@PaulZWu You can find a full example with index in the regular ELKI tutorial sources: github.com/elki-project/elki/blob/master/addons/tutorial/src/…Volsci
Thank you. I figured it out about a year ago. Your work is awesome -- we are using this package for an interesting project and its performance is impressive (with the indexing).Ordeal
I think this example is not valid anymore. The run method requires a Relation<O> not a Database object...Gaudreau

I successfully used the second library (https://github.com/alitouka/spark_dbscan) in my project. However, I couldn't use it as follows:

libraryDependencies += "org.alitouka" % "spark_dbscan_2.10" % "0.0.4"

resolvers += "Aliaksei Litouka's repository" at "http://alitouka-public.s3-website-us-east-1.amazonaws.com/"

Instead, I downloaded the code and updated it to Spark 2.2.1. Besides, some of the libraries had to be updated. Finally, I added the code to my project, and it works!

Hitandmiss answered 9/3, 2018 at 7:34 Comment(1)
Some imports need to be updated; for example, change "import org.apache.spark.Logging" to "import org.apache.spark.internal.Logging".Hitandmiss

I tested https://github.com/irvingc/dbscan-on-spark and can say that it consumes a lot of memory. For a 400K-point dataset with a smooth distribution, I used -Xmx12084m, and even then it ran for too long (>20 min). In addition, it only supports 2D data. I built the project with Maven, not sbt.

I also tested the second implementation. It is still the best that I have found. Unfortunately, the author has not maintained it since 2015. It took some time to upgrade it to a newer Spark version and resolve the version conflicts; I needed it to deploy on AWS.

Zetland answered 9/2, 2018 at 8:59 Comment(0)

You can also consider using smile, which provides an implementation of DBSCAN. The most direct way is to use groupBy combined with either mapGroups or flatMapGroups, and run dbscan inside each group. Here's an example:

  import smile.clustering._

  val dataset: Array[Array[Double]] = Array(
    Array(100, 100),
    Array(101, 100),
    Array(100, 101),
    Array(100, 100),
    Array(101, 100),
    Array(100, 101),

    Array(0, 0),
    Array(1, 0),
    Array(1, 2),
    Array(1, 1)
  )

  val dbscanResult = dbscan(dataset, minPts = 3, radius = 5)
  println(dbscanResult)

  // output
  DBSCAN clusters of 10 data points:
  0     6 (60.0%)
  1     4 (40.0%)
  Noise     0 ( 0.0%)

You can also write a User Defined Aggregate Function (UDAF) if you need to eke out more performance.

I use this approach at work to cluster time-series data: grouping with Spark's time window function and then executing DBSCAN within each window allows us to parallelize the implementation.
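That per-group pattern can be sketched roughly as follows. This is a sketch under assumptions: Point, Labeled, and windowKey are hypothetical names, and I assume the returned smile model exposes its per-point label array as y (as in smile 2.x); check your smile version:

```scala
import org.apache.spark.sql.Dataset
import smile.clustering._

// Hypothetical row types: one point per row, keyed by its time window.
case class Point(windowKey: Long, x: Double, v: Double)
case class Labeled(windowKey: Long, x: Double, v: Double, cluster: Int)

def clusterPerWindow(ds: Dataset[Point]): Dataset[Labeled] = {
  import ds.sparkSession.implicits._
  ds.groupByKey(_.windowKey).flatMapGroups { (key, rows) =>
    // Each group is collected on one executor; DBSCAN runs locally there.
    val pts = rows.toArray
    val data = pts.map(p => Array(p.x, p.v))
    val model = dbscan(data, minPts = 3, radius = 5.0)
    pts.zip(model.y).map { case (p, c) => Labeled(key, p.x, p.v, c) }
  }
}
```

Note that each window must fit in a single executor's memory; the parallelism comes from clustering many windows at once, not from distributing one DBSCAN run.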

I was inspired by the following article to do this

Gimel answered 23/1, 2019 at 2:13 Comment(1)
do you have a link to a build file that you used for smile?Cabaret

© 2022 - 2024 — McMap. All rights reserved.