Interpreters do a lot of extra work, so it is understandable that they end up significantly slower than native machine code. But languages such as C# and Java have JIT compilers, which supposedly compile to platform-native machine code.
And yet, according to benchmarks that seem legitimate enough, they are still 2-4x slower than C/C++ in most cases. Of course, I mean compared to equally optimized C/C++ code. I am well aware of the optimization benefits of JIT compilation and its ability to produce code that is faster than poorly optimized C/C++.
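To be concrete about what I mean by "equally optimized": a tight primitive loop with no allocation inside it, written the same way you would write it in C. The snippet below is a hypothetical sketch of that kind of code (the class name, sizes, and timing approach are my own, not taken from any particular benchmark suite), with a warm-up pass so the JIT has a chance to compile the hot loop before measuring.

```java
// Hypothetical microbenchmark: the same kind of loop you would write in C
// (plain arrays, primitive doubles, no autoboxing, no allocation in the loop).
public class SumBench {
    public static void main(String[] args) {
        int n = 10_000_000;
        double[] data = new double[n];
        for (int i = 0; i < n; i++) {
            data[i] = i * 0.5;
        }

        // Warm-up pass so the hot method gets JIT-compiled before we time it.
        double sink = sum(data);

        long start = System.nanoTime();
        sink += sum(data);
        long elapsed = System.nanoTime() - start;

        System.out.println("sum = " + sink + ", took " + elapsed / 1_000_000 + " ms");
    }

    static double sum(double[] data) {
        double total = 0.0;
        for (double x : data) {
            total += x;
        }
        return total;
    }
}
```

The C counterpart would be the same summation loop over a double array, compiled with the usual optimization flags; that is the kind of apples-to-apples comparison I am talking about.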
And after all the noise about how good Java's memory allocation is, why such horrendous memory usage? Across that particular benchmark suite, 2x to 50x more memory is used, roughly 30x on average, which is nothing to sneeze at...
NOTE that I don't want to start a WAR; I am asking about the technical details that define those performance and efficiency figures.