I had an earlier question about interpreting JMH output, which was mostly answered. I later updated that question with a related follow-up, but it makes more sense to ask it as a separate question.
This is the original question: Verify JMH measurements of simple for/lambda comparisons.
My question has to do with the performance of streams at particular levels of "work". The following results, excerpted from the previous question, illustrate what I'm wondering about:
Benchmark                                    Mode  Cnt          Score         Error  Units
MyBenchmark.shortLengthConstantSizeFor      thrpt  200  132278188.475 ± 1132184.820  ops/s
MyBenchmark.shortLengthConstantSizeLambda   thrpt  200   18750818.019 ±  171239.562  ops/s
MyBenchmark.mediumLengthConstantSizeFor     thrpt  200   55447999.297 ±  277442.812  ops/s
MyBenchmark.mediumLengthConstantSizeLambda  thrpt  200   15925281.039 ±   65707.093  ops/s
MyBenchmark.longerLengthConstantSizeFor     thrpt  200    3551842.518 ±   42612.744  ops/s
MyBenchmark.longerLengthConstantSizeLambda  thrpt  200    2791292.093 ±   12207.302  ops/s
MyBenchmark.longLengthConstantSizeFor       thrpt  200       2984.554 ±      57.557  ops/s
MyBenchmark.longLengthConstantSizeLambda    thrpt  200        331.741 ±       2.196  ops/s
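For context, each pair of benchmarks compares a plain "for" loop against an equivalent stream/lambda pipeline over a list whose size is fixed per test. The exact benchmark bodies are in the linked question; the sketch below is only my reconstruction of the "long" pair, and the list contents, the 300k size, and the summation are assumptions on my part.

import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public class MyBenchmark {

    // Assumed size for the "long" case; the question mentions 300k vs 300.
    private List<Integer> longList;

    @Setup
    public void setup() {
        longList = IntStream.range(0, 300_000)
                            .boxed()
                            .collect(Collectors.toList());
    }

    // Plain indexed loop; returning the sum keeps the JIT from eliminating the work.
    @Benchmark
    public int longLengthConstantSizeFor() {
        int sum = 0;
        for (int i = 0; i < longList.size(); i++) {
            sum += longList.get(i);
        }
        return sum;
    }

    // Stream/lambda version of the same summation.
    @Benchmark
    public int longLengthConstantSizeLambda() {
        return longList.stream().mapToInt(Integer::intValue).sum();
    }
}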
I was expecting that, as the tests moved from shorter lists to longer lists, the performance of the stream test would approach that of the "for" test.
For the "short" list, the stream throughput was 14% of the "for" throughput. For the medium list it was 29%, and for the longer list 78%, so up to that point the trend was what I expected. For the long list, however, it drops to 11%. For some reason, a list size of 300k, as opposed to 300, causes the stream's performance to fall off relative to the "for" loop.
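Those percentages are just each lambda score divided by the matching "for" score from the table above. A quick check (ThroughputRatios is a throwaway helper of mine, not part of the benchmark), with the scores copied verbatim:

public class ThroughputRatios {
    public static void main(String[] args) {
        // {for, lambda} throughput in ops/s, copied from the JMH output above.
        String[] labels = {"short", "medium", "longer", "long"};
        double[][] scores = {
            {132278188.475, 18750818.019},
            { 55447999.297, 15925281.039},
            {  3551842.518,  2791292.093},
            {     2984.554,      331.741}
        };
        for (int i = 0; i < scores.length; i++) {
            // Stream throughput as a percentage of the plain-for throughput.
            System.out.printf("%-7s %.1f%%%n", labels[i], 100 * scores[i][1] / scores[i][0]);
        }
    }
}

This prints roughly 14.2%, 28.7%, 78.6%, and 11.1%.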
I was wondering if anyone could corroborate results like this, and whether they had any thoughts about why it might be happening.
I'm running this on a Win7 laptop with Java 8.