How to specify multiple dependencies using --packages for spark-submit?

I have the following as the command line to start a spark streaming job.

    spark-submit --class com.biz.test \
            --packages \
                org.apache.spark:spark-streaming-kafka_2.10:1.3.0 \
                org.apache.hbase:hbase-common:1.0.0 \
                org.apache.hbase:hbase-client:1.0.0 \
                org.apache.hbase:hbase-server:1.0.0 \
                org.json4s:json4s-jackson:3.2.11 \
            ./test-spark_2.10-1.0.8.jar \
            >spark_log 2>&1 &

The job fails to start with the following error:

    Exception in thread "main" java.lang.IllegalArgumentException: Given path is malformed: org.apache.hbase:hbase-common:1.0.0
    at org.apache.spark.util.Utils$.resolveURI(Utils.scala:1665)
    at org.apache.spark.deploy.SparkSubmitArguments.parse$1(SparkSubmitArguments.scala:432)
    at org.apache.spark.deploy.SparkSubmitArguments.parseOpts(SparkSubmitArguments.scala:288)
    at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:87)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:105)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I've tried removing the formatting and going back to a single line, but that doesn't resolve the issue. I've also tried a bunch of variations: different versions, adding _2.10 to the end of the artifactId, etc.

According to the docs (spark-submit --help):

The format for the coordinates should be groupId:artifactId:version.

So what I have should be valid and should resolve these packages.

If it helps, I'm running Cloudera 5.4.4.

What am I doing wrong? How can I reference the hbase packages correctly?

Hiltonhilum answered 25/11, 2015 at 23:10 Comment(1)
Is it working fine? In my case I also had to add the jars via --jars and --driver-class-path. – Kernan
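
For illustration, a hypothetical sketch of what this comment describes, with placeholder jar paths: --jars takes a comma-separated list of local jars, while --driver-class-path takes a classpath (colon-separated on Linux).

    spark-submit --class com.biz.test \
        --jars /path/to/hbase-common-1.0.0.jar,/path/to/hbase-client-1.0.0.jar \
        --driver-class-path /path/to/hbase-common-1.0.0.jar:/path/to/hbase-client-1.0.0.jar \
        ./test-spark_2.10-1.0.8.jar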

The list of packages should be separated by commas, with no whitespace (breaking the list across lines should work just fine), for example:

    --packages org.apache.spark:spark-streaming-kafka_2.10:1.3.0,\
      org.apache.hbase:hbase-common:1.0.0
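
Applied to the command from the question, that means joining all five coordinates into one comma-separated value. A sketch reusing the coordinates from the question, kept on a single line since the comment below reports that spaces and line breaks had to go:

    spark-submit --class com.biz.test \
        --packages org.apache.spark:spark-streaming-kafka_2.10:1.3.0,org.apache.hbase:hbase-common:1.0.0,org.apache.hbase:hbase-client:1.0.0,org.apache.hbase:hbase-server:1.0.0,org.json4s:json4s-jackson:3.2.11 \
        ./test-spark_2.10-1.0.8.jar \
        >spark_log 2>&1 &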
Remission answered 25/11, 2015 at 23:15 Comment(1)
I found I also had to remove the spaces and line breaks to get it to work successfully: --packages org.apache.spark:spark-streaming-kafka_2.10:1.3.0,org.apache.hbase:hbase-common:1.0.0... – Hiltonhilum

I found it worthwhile to use the SparkSession builder in Spark 3.0.0 for MySQL and Postgres:

    from pyspark.sql import SparkSession

    # Comma-separated Maven coordinates, the same format --packages uses
    spark = (
        SparkSession.builder
        .appName('mysql-postgres')
        .config('spark.jars.packages',
                'mysql:mysql-connector-java:8.0.20,org.postgresql:postgresql:42.2.16')
        .getOrCreate()
    )
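
For completeness, the same coordinates could presumably be passed on the command line instead, since --packages populates the same spark.jars.packages setting (a sketch; my_job.py is a placeholder for the application script):

    spark-submit \
        --packages mysql:mysql-connector-java:8.0.20,org.postgresql:postgresql:42.2.16 \
        my_job.py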
Grapple answered 14/10, 2020 at 9:47 Comment(1)
I never heard of spark.jars.packages before, and I was a top-end developer (including multiple spark-sql and mllib contribs) from 2014 to 2019 – Afterburning

@Mohammad thanks for this input. This worked for me too. I had to load the Kafka and MySQL packages in a single SparkSession. I did something like this:

    spark = (
        SparkSession.builder
        # ... (other builder options elided in the original)
        .appName('myapp')
        # Add the Kafka and MySQL packages
        .config("spark.jars.packages",
                "org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2,mysql:mysql-connector-java:8.0.26")
        .getOrCreate()
    )

Oaten answered 4/9, 2021 at 4:44 Comment(0)
