I'm new to Spark. I'm coding a machine learning algorithm in Spark standalone (v3.0.0) with these configurations set:
SparkConf conf = new SparkConf();
conf.setMaster("local[*]");
conf.set("spark.driver.memory", "8g");
conf.set("spark.driver.maxResultSize", "8g");
conf.set("spark.memory.fraction", "0.6");
conf.set("spark.memory.storageFraction", "0.5");
conf.set("spark.sql.shuffle.partitions", "5");
conf.set("spark.memory.offHeap.enabled", "false");
conf.set("spark.reducer.maxSizeInFlight", "96m");
conf.set("spark.shuffle.file.buffer", "256k");
conf.set("spark.sql.debug.maxToStringFields", "100");
This is how I create the CrossValidator:
ParamMap[] paramGrid = new ParamGridBuilder()
.addGrid(gbt.maxBins(), new int[]{50})
.addGrid(gbt.maxDepth(), new int[]{2, 5, 10})
.addGrid(gbt.maxIter(), new int[]{5, 20, 40})
.addGrid(gbt.minInfoGain(), new double[]{0.0d, .1d, .5d})
.build();
CrossValidator gbcv = new CrossValidator()
.setEstimator(gbt)
.setEstimatorParamMaps(paramGrid)
.setEvaluator(gbevaluator)
.setNumFolds(5)
.setParallelism(8)
.setSeed(session.getArguments().getTrainingRandom());
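For reference, gbt and gbevaluator are a plain GBT estimator and its evaluator, roughly like this (simplified, with placeholder column names; I'm assuming the classification variants here, though the exact estimator shouldn't matter for the question):
import org.apache.spark.ml.classification.GBTClassifier;
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator;

GBTClassifier gbt = new GBTClassifier()
    .setLabelCol("label")          // placeholder column name
    .setFeaturesCol("features");   // placeholder column name

BinaryClassificationEvaluator gbevaluator = new BinaryClassificationEvaluator()
    .setLabelCol("label")
    .setRawPredictionCol("rawPrediction")
    .setMetricName("areaUnderROC");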
The problem is that when (in paramGrid) maxDepth is just {2, 5} and maxIter just {5, 20}, everything works fine, but with the grid above it keeps logging
WARN DAGScheduler: broadcasting large task binary with size xx
with xx going from 1000 KiB to 2.9 MiB, often leading to a timeout exception.
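If my math is right, the full grid also means a lot more fitting work. This is just my own back-of-the-envelope count, assuming CrossValidator fits one model per parameter combination per fold:
// working grid: maxBins {50} x maxDepth {2, 5} x maxIter {5, 20} x minInfoGain {0.0, 0.1, 0.5}
int smallGridCombos = 1 * 2 * 2 * 3;            // 12 combinations
// failing grid: maxBins {50} x maxDepth {2, 5, 10} x maxIter {5, 20, 40} x minInfoGain {0.0, 0.1, 0.5}
int fullGridCombos = 1 * 3 * 3 * 3;             // 27 combinations
int numFolds = 5;
int smallGridFits = smallGridCombos * numFolds; // 60 model fits
int fullGridFits = fullGridCombos * numFolds;   // 135 model fits
So the failing configuration trains more than twice as many models, and the extra ones are the deepest and longest (maxDepth 10, maxIter 40), which is when the warnings start appearing.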
Which Spark parameters should I change to avoid this?