Airflow SparkSubmitOperator - How to spark-submit in another server

I am new to Airflow and Spark and I am struggling with the SparkSubmitOperator.

Our Airflow scheduler and our Hadoop cluster are not set up on the same machine (first question: is this a good practice?).

We have many automated procedures that need to call PySpark scripts. Those PySpark scripts are stored on the Hadoop cluster (10.70.1.35). The Airflow DAGs are stored on the Airflow machine (10.70.1.22).

Currently, when we want to spark-submit a PySpark script with Airflow, we use a simple BashOperator as follows:

cmd = "ssh [email protected] spark-submit \
   --master yarn \
   --deploy-mode cluster \
   --executor-memory 2g \
   --executor-cores 2 \
   /home/hadoop/pyspark_script/script.py"
t = BashOperator(task_id='Spark_datamodel',bash_command=cmd,dag=dag)

It works perfectly fine, but we would like to start using the SparkSubmitOperator to spark-submit our PySpark scripts.

I tried this:

from airflow import DAG
from datetime import timedelta, datetime
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
from airflow.operators.bash_operator import BashOperator
from airflow.models import Variable

dag = DAG('SPARK_SUBMIT_TEST',
          start_date=datetime(2018, 12, 10),
          schedule_interval='@daily')


sleep = BashOperator(task_id='sleep', bash_command='sleep 10',dag=dag)

_config = {'application': 'hadoop@10.70.1.35:/home/hadoop/pyspark_script/test_spark_submit.py',
    'master' : 'yarn',
    'deploy-mode' : 'cluster',
    'executor_cores': 1,
    'EXECUTORS_MEM': '2G'
}

spark_submit_operator = SparkSubmitOperator(
    task_id='spark_submit_job',
    dag=dag,
    **_config)

sleep.set_downstream(spark_submit_operator) 

The syntax should be OK, as the DAG does not show up as broken. But when it runs, it gives me the following error:

[2018-12-14 03:26:42,600] {logging_mixin.py:95} INFO - [2018-12-14 03:26:42,600] {base_hook.py:83} INFO - Using connection to: yarn
[2018-12-14 03:26:42,974] {logging_mixin.py:95} INFO - [2018-12-14 03:26:42,973] {spark_submit_hook.py:283} INFO - Spark-Submit cmd: ['spark-submit', '--master', 'yarn', '--executor-cores', '1', '--name', 'airflow-spark', '--queue', 'root.default', 'hadoop@10.70.1.35:/home/hadoop/pyspark_script/test_spark_submit.py']
[2018-12-14 03:26:42,977] {models.py:1760} ERROR - [Errno 2] No such file or directory: 'spark-submit'
Traceback (most recent call last):
  File "/home/dataetl/anaconda3/lib/python3.6/site-packages/airflow/models.py", line 1659, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/home/dataetl/anaconda3/lib/python3.6/site-packages/airflow/contrib/operators/spark_submit_operator.py", line 168, in execute
    self._hook.submit(self._application)
  File "/home/dataetl/anaconda3/lib/python3.6/site-packages/airflow/contrib/hooks/spark_submit_hook.py", line 330, in submit
    **kwargs)
  File "/home/dataetl/anaconda3/lib/python3.6/subprocess.py", line 707, in __init__
    restore_signals, start_new_session)
  File "/home/dataetl/anaconda3/lib/python3.6/subprocess.py", line 1326, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'spark-submit'

Here are my questions:

  1. Should I install Spark with the Hadoop libraries on my Airflow machine? I'm asking because in this topic I read that I need to copy hdfs-site.xml and hive-site.xml. But as you can imagine, I have neither /etc/hadoop/ nor /etc/hive/ directories on my Airflow machine.

  2. a) If no, where exactly should I copy hdfs-site.xml and hive-site.xml to on my Airflow machine?

  3. b) If yes, does that mean I need to configure my Airflow machine as a client? A kind of edge node that does not participate in jobs but can be used to submit actions?

  4. Then, will I be able to spark-submit from my Airflow machine? If so, I don't need to create a connection in Airflow like I do for a MySQL database, for example, right?

  5. Oh, and the cherry on the cake: will I be able to store my PySpark scripts on my Airflow machine and spark-submit them from that same machine? That would be amazing!

Any comment would be very useful, even if you're not able to answer all my questions...

Thanks in advance anyway! :)

Florin answered 14/12, 2018 at 4:50

Comment (Dougald): "I have neither /etc/hadoop/ nor /etc/hive/ directories on my airflow machine" >> when you install Spark with the Hadoop libraries on a server that doesn't have the full Hadoop client, the *-site.xml config files must be present in a directory that is on the CLASSPATH; when using spark-submit, it is enough to set $HADOOP_CONF_DIR and let the script manage the CLASSPATH by itself.

To answer your first question: yes, it is good practice.

For how you can use the SparkSubmitOperator, please refer to my answer at https://mcmap.net/q/583494/-is-there-a-way-to-submit-spark-job-on-different-server-running-master

  1. Yes, you need the Spark binaries on the Airflow machine.
  2. -
  3. Yes.
  4. No -> you still need a connection to tell Airflow where you have installed your Spark binaries. Similar to https://mcmap.net/q/583495/-unable-to-execute-spark-job-using-sparksubmitoperator (see the sketch after this list).
  5. That should work.
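
Putting points 1 to 5 together, here is a rough sketch of what the DAG could look like (my addition, not part of the original answer). It assumes the Spark binaries are installed on the Airflow machine and that a Spark connection has been created in the Airflow UI (Admin -> Connections), e.g. with conn id spark_yarn_cluster (the name is made up), host yarn, and an extra JSON such as {"queue": "root.default", "deploy-mode": "cluster", "spark-home": "/opt/spark"}:

from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator

# Rough sketch, not a drop-in solution: the DAG name, connection id and
# application path are placeholders. The connection (point 4) tells Airflow
# which master, queue and deploy-mode to use; the application file (point 5)
# lives on the Airflow machine itself.
dag = DAG('spark_submit_from_airflow_machine',
          start_date=datetime(2018, 12, 10),
          schedule_interval='@daily')

spark_submit_task = SparkSubmitOperator(
    task_id='spark_submit_job',
    application='/home/airflow/pyspark_scripts/test_spark_submit.py',
    conn_id='spark_yarn_cluster',
    executor_cores=1,
    executor_memory='2g',
    name='airflow-spark-test',
    dag=dag,
)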
Syllabize answered 14/12, 2018 at 10:5
