Does the parameter "mapred.min.split.size" change the block size of a file that was already written to HDFS? Suppose that when starting my job I pass "mapred.min.split.size" with a value of 134217728 (128 MB). Which of the following is correct?
1 - Each map task processes the equivalent of 2 HDFS blocks (assuming each block is 64 MB);
2 - My input file (already stored in HDFS) is re-split so that it occupies 128 MB blocks in HDFS;
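For context on what the parameter controls: `FileInputFormat` derives the size of each input split from the minimum split size, the maximum split size, and the HDFS block size. Below is a small sketch of that computation, assuming the commonly documented formula `max(minSize, min(maxSize, blockSize))`; the function name `compute_split_size` and the variable names are my own for illustration.

```python
def compute_split_size(min_size, max_size, block_size):
    # Assumed FileInputFormat rule: the block size is used unless it falls
    # outside the [min_size, max_size] range configured for the job.
    return max(min_size, min(max_size, block_size))

MB = 1024 * 1024
block_size = 64 * MB    # HDFS block size the file was written with
min_split = 128 * MB    # mapred.min.split.size passed when starting the job
max_split = 2**63 - 1   # default maximum: effectively unlimited

split = compute_split_size(min_split, max_split, block_size)
print(split // MB, "MB per split")            # → 128 MB per split
print(split // block_size, "blocks per map")  # → 2 blocks per map
```

With these numbers, each logical input split spans two 64 MB HDFS blocks. Note that this only affects how the input is divided among map tasks at job-submission time; the blocks of the file already stored in HDFS are not rewritten.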