In reference to this question, the answer specifies that the unsorted array takes more time because it fails branch prediction. But if we make a minor change to the program:
import java.util.Arrays;
import java.util.Random;

public class Main {
    public static void main(String[] args) {
        // Generate data
        int arraySize = 32768;
        int[] data = new int[arraySize];
        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c) {
            data[c] = rnd.nextInt() % 256;
        }

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 100000; ++i) {
            // Primary loop
            for (int c = 0; c < arraySize; ++c) {
                if (data[c] >= 128) {
                    sum = data[c];
                }
            }
        }
        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}
Here I have replaced (from the original question)

if (data[c] >= 128)
    sum += data[c];

with

if (data[c] >= 128)
    sum = data[c];

Now the unsorted array gives approximately the same time as the sorted one. Why does branch prediction not make a difference in this case?
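One plausible explanation (my assumption, not verified against the JIT's actual machine code): with a plain assignment there is no loop-carried addition, and the JIT can compile the branch into a conditional move, whose cost is the same whether or not the branch outcome is predictable. A minimal sketch of that branchless form, expressed back in Java (class and method names are hypothetical):

```java
public class CmovSketch {
    // Branchy version, as in the modified question.
    static long branchy(int[] data) {
        long sum = 0;
        for (int v : data) {
            if (v >= 128) {
                sum = v;
            }
        }
        return sum;
    }

    // Branchless equivalent: a ternary the JIT can lower to a conditional
    // move, making the data pattern irrelevant to timing.
    static long branchless(int[] data) {
        long sum = 0;
        for (int v : data) {
            sum = (v >= 128) ? v : sum;
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {3, 200, 50, 130, 127, 128};
        System.out.println(branchy(data));     // both print 128, the last
        System.out.println(branchless(data));  // element that is >= 128
    }
}
```

The two methods always compute the same result; they differ only in whether the choice is expressed as a branch or as a select.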
Comments:

sum += data[c] with or without sorting shows a perf difference (sorted about 3x faster), but sum = data[c] with or without sorting shows no perf difference. – Semiotics

Try data[c] = (c / 8) % 256 and compare with (c * 31) % 256. – Mousetrap

Measure with jmh, not with OP's actual code. – Mousetrap
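Mousetrap's suggested experiment could be sketched as below (the harness, class, and method names are mine; only the two fill expressions come from the comment). (c / 8) % 256 produces runs of eight equal values, so the branch is easy to predict even without sorting, while (c * 31) % 256 walks the residues in a scattered order:

```java
import java.util.function.IntUnaryOperator;

public class PatternExperiment {
    static final int SIZE = 32768;

    static int[] fill(IntUnaryOperator f) {
        int[] data = new int[SIZE];
        for (int c = 0; c < SIZE; ++c) {
            data[c] = f.applyAsInt(c);
        }
        return data;
    }

    // Same summing loop as the original question (sum += data[c]).
    static long run(int[] data, int iters) {
        long sum = 0;
        for (int i = 0; i < iters; ++i) {
            for (int c = 0; c < SIZE; ++c) {
                if (data[c] >= 128) {
                    sum += data[c];
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        // Runs of 8 equal values: the branch outcome changes rarely.
        int[] predictable = fill(c -> (c / 8) % 256);
        // Scattered residues: the branch outcome looks random to the predictor.
        int[] scattered = fill(c -> (c * 31) % 256);

        for (int[] data : new int[][]{predictable, scattered}) {
            long start = System.nanoTime();
            long sum = run(data, 100000);
            System.out.println((System.nanoTime() - start) / 1e9 + " s, sum = " + sum);
        }
    }
}
```

Both fills happen to contain every value 0..255 equally often, so the two sums come out identical; any timing gap between them is down to branch predictability alone.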