No matter how much I tinker with the settings in yarn-site.xml, i.e., using all of the options below:
yarn.scheduler.minimum-allocation-vcores
yarn.nodemanager.resource.memory-mb
yarn.nodemanager.resource.cpu-vcores
yarn.scheduler.maximum-allocation-mb
yarn.scheduler.maximum-allocation-vcores
I still cannot get my application (i.e., Spark) to utilize all the cores on the cluster. The Spark executors correctly take up all of the available memory, but each executor only ever takes a single core, never more.
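For reference, this is the general shape of those yarn-site.xml entries (the values below are placeholders for illustration, not my actual ones):

<!-- total resources each NodeManager offers to YARN -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>49152</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>24</value>
</property>
<!-- per-container allocation bounds enforced by the scheduler -->
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>49152</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>24</value>
</property>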
Here are the options configured in spark-defaults.conf:
spark.executor.cores 3
spark.executor.memory 5100m
spark.yarn.executor.memoryOverhead 800
spark.driver.memory 2g
spark.yarn.driver.memoryOverhead 400
spark.executor.instances 28
spark.reducer.maxMbInFlight 120
spark.shuffle.file.buffer.kb 200
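If I understand the sizing correctly, each executor container should be requested with 3 vcores and 5100 + 800 = 5900 MB of memory:

per-executor memory request = 5100 MB + 800 MB overhead = 5900 MB
per-executor vcore request  = spark.executor.cores      = 3
expected total vcores       = 28 executors × 3          = 84 (plus the driver)

The memory side behaves as expected, but each container is granted only 1 vcore, i.e., 28 in total.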
Notice that spark.executor.cores is set to 3, yet the executors still receive only one core each.
How do I fix this?