Which HBase connector for Spark 2.0 should I use? [closed]

Our stack is composed of Google Cloud Dataproc (Spark 2.0) and Google Cloud Bigtable (HBase 1.2.0), and I am looking for a connector that works with these versions.

Spark 2.0 and new Dataset API support is not clear to me for the connectors I have found:

  • spark-hbase
  • spark-hbase-connector
  • hortonworks-spark/shc

The project is written in Scala 2.11 with SBT.

Thanks for your help

Harkins asked 1/12, 2016 at 11:00 Comment(0)

Update: SHC now seems to work with Spark 2 and the Table API. See https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/tree/master/scala/bigtable-shc

Original answer:

I don't believe any of these (or any other existing connector) will do all that you would like today.

  • spark-hbase will probably be the right solution once it is released (HBase 1.4?), but it currently only builds at head and is still working on Spark 2 support.
  • spark-hbase-connector only seems to support the RDD APIs, but since those are more stable, it might be somewhat helpful.
  • hortonworks-spark/shc probably won't work because I believe it only supports Spark 1 and uses the older HTable APIs, which do not work with Bigtable.

I would recommend just using the HBase MapReduce APIs with RDD methods like newAPIHadoopRDD (or possibly spark-hbase-connector?), then manually converting the RDDs into Datasets. This approach is much easier in Scala or Java than in Python.
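
A minimal sketch of that read-then-convert approach, assuming a hypothetical table "my_table" with a column family "cf" and qualifier "col" (the Bigtable connection itself is assumed to be configured through the bigtable-hbase client settings on the cluster):

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.spark.sql.SparkSession

    object HBaseReadExample {
      // Plain case class so Spark can derive a Dataset encoder for it.
      case class Record(key: String, value: String)

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("hbase-read").getOrCreate()
        import spark.implicits._

        val conf = HBaseConfiguration.create()
        conf.set(TableInputFormat.INPUT_TABLE, "my_table") // hypothetical table name

        // Read (rowkey, Result) pairs through the HBase MapReduce input format.
        val rdd = spark.sparkContext.newAPIHadoopRDD(
          conf,
          classOf[TableInputFormat],
          classOf[ImmutableBytesWritable],
          classOf[Result])

        // Manually convert each HBase Result into the case class, then to a Dataset.
        val ds = rdd.map { case (rowKey, result) =>
          val value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col")) // hypothetical cf/qualifier
          Record(Bytes.toString(rowKey.get()), Bytes.toString(value))
        }.toDS()

        ds.show()
      }
    }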

This is an area that the HBase community is working to improve and Google Cloud Dataproc will incorporate those improvements as they happen.

Kaete answered 1/12, 2016 at 19:33 Comment(5)
Thanks for your help, this is what I have done for reads and it works quite well: spark.sparkContext.newAPIHadoopRDD(config, classOf[TableInputFormat], classOf[ImmutableBytesWritable], classOf[Result]). How should I use this API for bulk writes?Harkins
Simply with saveAsNewAPIHadoopDataset(...); see the write sketch below these comments.Harkins
Looks like Hortonworks released a version for Spark 2: github.com/hortonworks-spark/shc/tree/v1.0.1-2.0Sapowith
Is spark-hbase compatible with Scala 2.11? I think it is built for Scala 2.10: repository.apache.org/content/repositories/snapshots/org/apache/…Dominique
Any update on this? I want to sort an HBase (non-rowkey) column to get the rowkeys corresponding to the top ten column values. Will doing this in Spark using the spark-hbase connector run fast?Ogham
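
Following up the comments above, a minimal sketch of the bulk-write path via saveAsNewAPIHadoopDataset, again with a hypothetical table "my_table", column family "cf", and qualifier "col":

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Put
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.hadoop.mapreduce.Job
    import org.apache.spark.sql.SparkSession

    object HBaseWriteExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("hbase-write").getOrCreate()

        val conf = HBaseConfiguration.create()
        conf.set(TableOutputFormat.OUTPUT_TABLE, "my_table") // hypothetical table name

        // TableOutputFormat reads its settings from a Hadoop Job configuration.
        val job = Job.getInstance(conf)
        job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

        val data = spark.sparkContext.parallelize(Seq("a" -> "1", "b" -> "2"))

        // Turn each record into a Put keyed by its rowkey.
        val puts = data.map { case (key, value) =>
          val put = new Put(Bytes.toBytes(key))
          put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value)) // hypothetical cf/qualifier
          (new ImmutableBytesWritable(Bytes.toBytes(key)), put)
        }

        puts.saveAsNewAPIHadoopDataset(job.getConfiguration)
      }
    }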

In addition to the above answer: using newAPIHadoopRDD means you pull all of the data out of HBase, and from then on it is all core Spark; you do not get HBase-specific APIs such as Filters. And for the current spark-hbase, only snapshot builds are available.

Planimetry answered 1/12, 2016 at 21:09 Comment(0)
