How can I run JMH benchmarks inside my existing project using JUnit tests? The official documentation recommends making a separate project, using the Maven Shade plugin, and launching JMH from a main method. Is this necessary, and why is it recommended?
I've been running JMH inside my existing Maven project using JUnit, with no apparent ill effects. I cannot say why the authors recommend doing things differently, and I have not observed a difference in results. JMH launches a separate JVM to run the benchmarks, which isolates them. Here is what I do:
Add the JMH dependencies to your POM:
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.21</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.21</version>
    <scope>test</scope>
</dependency>
Note that I've placed them in scope test. In Eclipse, you may need to configure the annotation processor manually; NetBeans handles this automatically.
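If your build or IDE does not pick the processor up from the test classpath on its own, one option is to declare it explicitly on the compiler plugin. This is a sketch assuming maven-compiler-plugin 3.5 or newer; match the version to your JMH dependency:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <annotationProcessorPaths>
                    <!-- runs the JMH benchmark generator during compilation -->
                    <path>
                        <groupId>org.openjdk.jmh</groupId>
                        <artifactId>jmh-generator-annprocess</artifactId>
                        <version>1.21</version>
                    </path>
                </annotationProcessorPaths>
            </configuration>
        </plugin>
    </plugins>
</build>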
Create your JUnit and JMH class. I've chosen to combine both into a single class, but that is up to you. Notice that OptionsBuilder.include is what actually determines which benchmarks will be run from your JUnit test!

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.junit.Test;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.*;

public class TestBenchmark {

    @Test
    public void launchBenchmark() throws Exception {
        Options opt = new OptionsBuilder()
                // Specify which benchmarks to run.
                // You can be more specific if you'd like to run only one benchmark per test.
                .include(this.getClass().getName() + ".*")
                // Set the following options as needed
                .mode(Mode.AverageTime)
                .timeUnit(TimeUnit.MICROSECONDS)
                .warmupTime(TimeValue.seconds(1))
                .warmupIterations(2)
                .measurementTime(TimeValue.seconds(1))
                .measurementIterations(2)
                .threads(2)
                .forks(1)
                .shouldFailOnError(true)
                .shouldDoGC(true)
                //.jvmArgs("-XX:+UnlockDiagnosticVMOptions", "-XX:+PrintInlining")
                //.addProfiler(WinPerfAsmProfiler.class)
                .build();

        new Runner(opt).run();
    }

    // The JMH samples are the best documentation for how to use it
    // http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/
    @State(Scope.Thread)
    public static class BenchmarkState {
        List<Integer> list;

        @Setup(Level.Trial)
        public void initialize() {
            Random rand = new Random();
            list = new ArrayList<>();
            for (int i = 0; i < 1000; i++)
                list.add(rand.nextInt());
        }
    }

    @Benchmark
    public void benchmark1(BenchmarkState state, Blackhole bh) {
        List<Integer> list = state.list;
        for (int i = 0; i < 1000; i++)
            bh.consume(list.get(i));
    }
}
JMH's annotation processor seems to not work well with compile-on-save in NetBeans. You may need to do a full Clean and Build whenever you modify the benchmarks. (Any suggestions appreciated!)

Run your launchBenchmark test and watch the results!

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.Foo
# JMH version: 1.21
# VM version: JDK 1.8.0_172, Java HotSpot(TM) 64-Bit Server VM, 25.172-b11
# VM invoker: /usr/lib/jvm/java-8-jdk/jre/bin/java
# VM options: <none>
# Warmup: 2 iterations, 1 s each
# Measurement: 2 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 2 threads, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: com.Foo.benchmark1

# Run progress: 0.00% complete, ETA 00:00:04
# Fork: 1 of 1
# Warmup Iteration   1: 4.258 us/op
# Warmup Iteration   2: 4.359 us/op
Iteration   1: 4.121 us/op
Iteration   2: 4.029 us/op

Result "benchmark1":
  4.075 us/op

# Run complete. Total time: 00:00:06

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark        Mode  Cnt  Score   Error  Units
Foo.benchmark1   avgt    2  4.075          us/op

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.013 sec
Runner.run even returns RunResult objects on which you can do assertions, etc.
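For example, a minimal sketch of asserting on the primary score; the 50 us/op threshold here is a made-up value for illustration (requires java.util.Collection, org.junit.Assert, and org.openjdk.jmh.results.RunResult):

Collection<RunResult> results = new Runner(opt).run();
for (RunResult result : results) {
    // the primary result is the benchmark score in the units configured above
    double score = result.getPrimaryResult().getScore();
    Assert.assertTrue("Benchmark regressed: " + score + " us/op", score < 50.0);
}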
Adding .addProfiler(StackProfiler.class) to the options produces stack-sampling output like:

....[Thread state: RUNNABLE]........................................................................
 50.0%  50.0% java.net.SocketInputStream.socketRead0
 21.5%  21.5% com.mycompany.myapp.MyProfiledClass.myMethod
  9.4%   9.4% java.io.WinNTFileSystem.getBooleanAttributes
  4.7%   4.7% java.util.zip.ZipFile.getEntry
  3.0%   3.0% java.lang.String.regionMatches
...
A declarative approach using annotations:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.junit.Test;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@Threads(1)
public class TestBenchmark {

    @Param({"10", "100", "1000"})
    public int iterations;

    @Setup(Level.Invocation)
    public void setupInvocation() throws Exception {
        // executed before each invocation of the benchmark
    }

    @Setup(Level.Iteration)
    public void setupIteration() throws Exception {
        // executed before each iteration of the benchmark
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Fork(warmups = 1, value = 1)
    @Warmup(batchSize = -1, iterations = 3, time = 10, timeUnit = TimeUnit.MILLISECONDS)
    @Measurement(batchSize = -1, iterations = 10, time = 10, timeUnit = TimeUnit.MILLISECONDS)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    public void test() throws Exception {
        Thread.sleep(ThreadLocalRandom.current().nextInt(0, iterations));
    }

    @Test
    public void benchmark() throws Exception {
        String[] argv = {};
        org.openjdk.jmh.Main.main(argv);
    }
}
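Note that org.openjdk.jmh.Main.main with an empty argument array runs every benchmark it can find on the classpath. To restrict the run to this class, pass its name as a regex argument, the same pattern the next example uses:

org.openjdk.jmh.Main.main(new String[]{TestBenchmark.class.getName()});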
Another example, comparing three ways of encoding a char[] to UTF-8 bytes:

import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

import org.junit.Test;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@Threads(1)
@Fork(1)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 5, time = 1)
@BenchmarkMode(Mode.All)
public class ToBytesTest {

    public static void main(String[] args) {
        // sanity check: all three variants produce the same leading byte
        ToBytesTest test = new ToBytesTest();
        System.out.println(test.string()[0] == test.charBufferWrap()[0]
                && test.charBufferWrap()[0] == test.charBufferAllocate()[0]);
    }

    @Test
    public void benchmark() throws Exception {
        org.openjdk.jmh.Main.main(new String[]{ToBytesTest.class.getName()});
    }

    char[] chars = new char[]{'中', '国'};

    @Benchmark
    public byte[] string() {
        return new String(chars).getBytes(StandardCharsets.UTF_8);
    }

    @Benchmark
    public byte[] charBufferWrap() {
        return StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars)).array();
    }

    @Benchmark
    public byte[] charBufferAllocate() {
        CharBuffer cb = CharBuffer.allocate(chars.length).put(chars);
        cb.flip();
        return StandardCharsets.UTF_8.encode(cb).array();
    }
}
You may write your own JUnit Runner to run benchmarks. This allows you to run and debug them from the Eclipse IDE:
Write a class extending org.junit.runner.Runner:

public class BenchmarkRunner extends Runner {
    //...
}
Implement the constructor and a few methods (a fuller sketch follows these steps):

public class BenchmarkRunner extends Runner {

    public BenchmarkRunner(Class<?> benchmarkClass) {
    }

    public Description getDescription() {
        //...
    }

    public void run(RunNotifier notifier) {
        //...
    }
}
Add the runner to your test class:

@RunWith(BenchmarkRunner.class)
public class CustomCollectionBenchmark {
    //...
}
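A minimal sketch of such a runner, assuming JUnit 4 and delegating to the JMH Runner; the option choices are illustrative, not the blog post's exact code:

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner extends Runner {

    private final Class<?> benchmarkClass;
    private final Description description;

    public BenchmarkRunner(Class<?> benchmarkClass) {
        this.benchmarkClass = benchmarkClass;
        this.description = Description.createSuiteDescription(benchmarkClass);
    }

    @Override
    public Description getDescription() {
        return description;
    }

    @Override
    public void run(RunNotifier notifier) {
        notifier.fireTestStarted(description);
        try {
            // run all @Benchmark methods of the annotated class in a forked JVM
            Options options = new OptionsBuilder()
                    .include(benchmarkClass.getName() + ".*")
                    .forks(1)
                    .build();
            new org.openjdk.jmh.runner.Runner(options).run();
        } catch (Throwable t) {
            notifier.fireTestFailure(new Failure(description, t));
        } finally {
            notifier.fireTestFinished(description);
        }
    }
}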
I've described this in detail in my blog post: https://vbochenin.github.io/running-jmh-from-eclipse