How can I reach a Spark cluster in a Docker container with spark-submit and a Python script?
I've created a Spark cluster with one master and two slaves, each in its own Docker container. I launch it with the command start-all.sh.

I can reach the UI from my local machine at localhost:8080, and it shows that the cluster is up and running: [screenshot of the Spark UI]

Then I try to submit a simple Python script from my host machine (not from inside a Docker container) with spark-submit: spark-submit --master spark://spark-master:7077 test.py

test.py:

import pyspark
conf = pyspark.SparkConf().setAppName('MyApp').setMaster('spark://spark-master:7077')
sc = pyspark.SparkContext(conf=conf)

But the console returned this error:

22/01/26 09:20:39 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/01/26 09:20:40 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-master:7077
org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:109)
    at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1$$anon$1.run(StandaloneAppClient.scala:106)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.io.IOException: Failed to connect to spark-master:7077
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
    ... 4 more
Caused by: java.net.UnknownHostException: spark-master
    at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
    at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
    at java.base/java.net.InetAddress.getByName(InetAddress.java:1252)
    at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146)
    at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143)
    at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
    at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
    at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
    at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
    at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
    at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
    at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:202)
    at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:48)
    at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:182)
    at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:168)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
    at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
    at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
    at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
    at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:985)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:505)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:416)
    at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:475)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
    at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more

I also tried with a simple Scala script, just to try to reach the cluster, but I got the same error.

Do you have any idea how I can reach my cluster with a Python script?

(Edit)

I forgot to mention that I created a Docker network between my master and my slaves. So, with the help of MrTshoot and Gaarv, I replaced spark-master (in spark://spark-master:7077) with the IP of my master container (you can get it with the command docker network inspect my-network).

And it works! Thanks!
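For reference, here is a minimal sketch of the working script. The network name my-network and the IP 172.18.0.2 are placeholders; substitute the values from your own docker network inspect output:

import pyspark

# The hostname spark-master only resolves inside the Docker network, so
# point the driver at the master container's IP on that network instead.
# Find it with: docker network inspect my-network
conf = pyspark.SparkConf().setAppName('MyApp').setMaster('spark://172.18.0.2:7077')
sc = pyspark.SparkContext(conf=conf)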

Ishii answered 26/1, 2022 at 8:57
When you specify .setMaster('spark://spark-master:7077'), it means "reach the Spark cluster at the DNS name spark-master on port 7077", and your local machine cannot resolve that name.

So in order for your host machine to reach the cluster, you must instead specify the Docker DNS / IP address of your Spark cluster: check the docker0 interface on your local machine and replace spark-master with its address.
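You can reproduce the resolution failure outside of Spark. Here is a minimal check (plain Python, no Spark required, run on the host):

import socket

try:
    # This is the same lookup the Spark driver attempts; it fails on the host
    # because spark-master is only defined inside the Docker network.
    print(socket.gethostbyname('spark-master'))
except socket.gaierror as err:
    print('Cannot resolve spark-master from the host:', err)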

Jodijodie answered 26/1, 2022 at 9:25

Comments (2):
I need to solve the reverse problem: my Spark driver connects, but it crashes immediately with the message 'Failed to connect to docker-conatiner-0:41047', saying unknown host. I tried putting the same container name with the host machine's IP in /etc/hosts on the master node. (Yeager)
Hi, I have the same problem. Can you elaborate on "check the docker0 interface on your local machine and replace spark-master with its address"? I have no idea where to start. (Innes)

You cannot reach Docker services from your host by their service name. You should set up DNS (or use the service's IP), or use the following trick:

  1. Expose your Spark cluster ports to the host (e.g. port 7077).

  2. Open your /etc/hosts and add the following entries:

    127.0.0.1 localhost spark-master

    ::1 localhost spark-master
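As a quick sanity check (assuming you published port 7077 to the host, e.g. with docker run -p 7077:7077), this plain-Python snippet verifies that spark-master:7077 is now reachable:

import socket

# With the /etc/hosts entries in place, spark-master resolves to 127.0.0.1,
# and the published port forwards the connection into the master container.
with socket.create_connection(('spark-master', 7077), timeout=5):
    print('spark-master:7077 is reachable from the host')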

Documentation answered 26/1, 2022 at 9:47
