I have an Oozie configuration:
<spark xmlns="uri:oozie:spark-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
        <property>
            <name>mapred.job.queue.name</name>
            <value>batch_2</value>
        </property>
        <property>
            <name>job.queue.name</name>
            <value>batch_1</value>
        </property>
    </configuration>
    <master>yarn-cluster</master>
    <mode>cluster</mode>
    <name>Batch Search Oozie</name>
    <class>eu.inn.ilms.batchSearch.BatchSearch</class>
    <jar>hdfs:///user/oozie/workflows/batchsearch/lib/batchSearch-0.0.1-SNAPSHOT.jar</jar>
    <arg>${zookeeperQuorum}</arg>
    <arg>${solrQuery}</arg>
    <arg>${hdfsFolderPaths}</arg>
    <arg>${solrFinalCollection}</arg>
    <arg>${mongoServiceUrl}</arg>
</spark>
The map-reduce launcher job is executed on the queue I want, but the Spark job still executes on the default queue. Is there a property that will allow me to set this?
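One thing worth trying (an assumption on my part, not something confirmed in this thread): the Spark driver reads its queue from spark.yarn.queue, not from the Hadoop mapred.job.queue.name property, so you can pass the queue to spark-submit via the spark-opts element of the Spark action. A minimal sketch, reusing the batch_1 queue from your configuration:

```xml
<!-- inside the <spark> action, after <jar> -->
<!-- --queue sets spark.yarn.queue for the child Spark application;
     the equivalent long form is --conf spark.yarn.queue=batch_1 -->
<spark-opts>--queue batch_1</spark-opts>
```

The spark-opts element is part of the uri:oozie:spark-action schema and its contents are forwarded to spark-submit as-is.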
Properties prefixed oozie.launcher.x.y.z are applied (as x.y.z) to the Oozie "launcher" job, a dummy mapper used to bootstrap shell / java / sqoop / spark actions; properties named directly x.y.z should be applied to the child YARN job spawned by sqoop / spark, unless the Spark driver has its own override rules. Also, Oozie has some quirks and occasional regressions. – Maice