How to run JMH from inside JUnit tests?
How can I run JMH benchmarks inside my existing project using JUnit tests? The official documentation recommends making a separate project, using Maven shade plugin, and launching JMH inside the main method. Is this necessary and why is it recommended?

Hoplite answered 27/5, 2015 at 14:45 Comment(0)
I've been running JMH inside my existing Maven project using JUnit, with no apparent ill effects. I cannot say why the authors recommend otherwise, and I have not observed a difference in results: JMH launches a separate JVM to run the benchmarks, which isolates them from JUnit and the IDE. Here is what I do:

  • Add the JMH dependencies to your POM:

    <dependency>
      <groupId>org.openjdk.jmh</groupId>
      <artifactId>jmh-core</artifactId>
      <version>1.21</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.openjdk.jmh</groupId>
      <artifactId>jmh-generator-annprocess</artifactId>
      <version>1.21</version>
      <scope>test</scope>
    </dependency>
    

    Note that I've placed them in the test scope.

    In Eclipse, you may need to configure the annotation processor manually. NetBeans handles this automatically.
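    If Maven does not pick the annotation processor up on its own, you can also wire it into the compiler plugin explicitly. This is a sketch; match the version to your jmh dependencies:

    ```xml
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <annotationProcessorPaths>
          <path>
            <groupId>org.openjdk.jmh</groupId>
            <artifactId>jmh-generator-annprocess</artifactId>
            <version>1.21</version>
          </path>
        </annotationProcessorPaths>
      </configuration>
    </plugin>
    ```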

  • Create your JUnit and JMH class. I've chosen to combine both into a single class, but that is up to you. Notice that OptionsBuilder.include is what actually determines which benchmarks will be run from your JUnit test!

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.concurrent.TimeUnit;
    import org.junit.Test;
    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.*;
    
    
    public class TestBenchmark {

        @Test
        public void launchBenchmark() throws Exception {
            Options opt = new OptionsBuilder()
                // Specify which benchmarks to run.
                // You can be more specific if you'd like to run only one benchmark per test.
                .include(this.getClass().getName() + ".*")
                // Set the following options as needed
                .mode(Mode.AverageTime)
                .timeUnit(TimeUnit.MICROSECONDS)
                .warmupTime(TimeValue.seconds(1))
                .warmupIterations(2)
                .measurementTime(TimeValue.seconds(1))
                .measurementIterations(2)
                .threads(2)
                .forks(1)
                .shouldFailOnError(true)
                .shouldDoGC(true)
                //.jvmArgs("-XX:+UnlockDiagnosticVMOptions", "-XX:+PrintInlining")
                //.addProfiler(WinPerfAsmProfiler.class)
                .build();

            new Runner(opt).run();
        }

        // The JMH samples are the best documentation for how to use it:
        // http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/
        @State(Scope.Thread)
        public static class BenchmarkState {
            List<Integer> list;

            @Setup(Level.Trial)
            public void initialize() {
                Random rand = new Random();
                list = new ArrayList<>();
                for (int i = 0; i < 1000; i++)
                    list.add(rand.nextInt());
            }
        }

        @Benchmark
        public void benchmark1(BenchmarkState state, Blackhole bh) {
            List<Integer> list = state.list;
            for (int i = 0; i < 1000; i++)
                bh.consume(list.get(i));
        }
    }
    
  • JMH's annotation processor seems to not work well with compile-on-save in NetBeans. You may need to do a full Clean and Build whenever you modify the benchmarks. (Any suggestions appreciated!)

  • Run your launchBenchmark test and watch the results!

    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running com.Foo
    # JMH version: 1.21
    # VM version: JDK 1.8.0_172, Java HotSpot(TM) 64-Bit Server VM, 25.172-b11
    # VM invoker: /usr/lib/jvm/java-8-jdk/jre/bin/java
    # VM options: <none>
    # Warmup: 2 iterations, 1 s each
    # Measurement: 2 iterations, 1 s each
    # Timeout: 10 min per iteration
    # Threads: 2 threads, will synchronize iterations
    # Benchmark mode: Average time, time/op
    # Benchmark: com.Foo.benchmark1
    
    # Run progress: 0.00% complete, ETA 00:00:04
    # Fork: 1 of 1
    # Warmup Iteration   1: 4.258 us/op
    # Warmup Iteration   2: 4.359 us/op
    Iteration   1: 4.121 us/op
    Iteration   2: 4.029 us/op
    
    
    Result "benchmark1":
      4.075 us/op
    
    
    # Run complete. Total time: 00:00:06
    
    REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
    why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
    experiments, perform baseline and negative tests that provide experimental control, make sure
    the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
    Do not assume the numbers tell you what you want them to tell.
    
    Benchmark                                Mode  Cnt  Score   Error  Units
    Foo.benchmark1                           avgt    2  4.075          us/op
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.013 sec
    
  • Runner.run even returns RunResult objects on which you can do assertions, etc.
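    For example, you can fail the test when a benchmark regresses past a threshold. This is a sketch: the test class name and the 50 us/op limit are made up, so pick values that make sense for your own benchmark and hardware:

    ```java
    import java.util.Collection;
    import org.junit.Assert;
    import org.junit.Test;
    import org.openjdk.jmh.results.RunResult;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class BenchmarkRegressionTest {

        @Test
        public void benchmarkStaysFast() throws Exception {
            Options opt = new OptionsBuilder()
                .include(TestBenchmark.class.getName() + ".*")
                .forks(1)
                .build();

            // Runner.run returns one RunResult per benchmark that was executed.
            Collection<RunResult> results = new Runner(opt).run();
            for (RunResult r : results) {
                double score = r.getPrimaryResult().getScore();
                // Hypothetical limit: fail if the average time exceeds 50 us/op.
                Assert.assertTrue("Benchmark too slow: " + score, score < 50.0);
            }
        }
    }
    ```
    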

Hoplite answered 27/5, 2015 at 14:59 Comment(10)
This is not a recommended way to run benchmarks under JMH. Unit tests and the IDE itself interfere with the measurements. Do it right from the command line.Untinged
@IvanVoroshilin I've tried it both ways and did not see a difference in results. Do you have concrete advice under what conditions this becomes a problem?Hoplite
The results are less reliable, it is just a recommendation. Eliminate the external factors. This gets in the way when it comes to microbenchmarking.Untinged
@IvanVoroshilin Sounds like FUD spread by people who hate IDEs (I am referring to some of the core JVM developers, who also develop JMH). If we want to split hairs, we should also advise people to shut down the window manager, stop all daemons, etc, etc. In practice, warming up and averaging over several iterations smooths out most timing noise.Hoplite
Forking should negate most possible side effects.Hindu
If only we had a benchmarking framework to measure the differences... ;)Herwick
@AleksandrDubinsky are you sure that LookUtils belongs to standard libs or to those JMH dependencies?Reeve
@Reeve I don't see a reference to "LookUtils" anywhere on this page.Hoplite
@AleksandrDubinsky weird! I probably got access to a cached version, then! Thanks!Reeve
@AleksandrDubinsky I'd suggest to add the StackProfiler which prints very useful profiling results at the end: .addProfiler(StackProfiler.class) like: ....[Thread state: RUNNABLE]........................................................................ 50.0% 50.0% java.net.SocketInputStream.socketRead0 21.5% 21.5% com.mycompany.myapp.MyProfiledClass.myMethod 9.4% 9.4% java.io.WinNTFileSystem.getBooleanAttributes 4.7% 4.7% java.util.zip.ZipFile.getEntry 3.0% 3.0% java.lang.String.regionMatches ...Hendrika
A declarative approach using annotations:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import org.junit.Test;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@Threads(1)
public class TestBenchmark {

    @Param({"10", "100", "1000"})
    public int iterations;

    @Setup(Level.Invocation)
    public void setupInvocation() throws Exception {
        // executed before each invocation of the benchmark
    }

    @Setup(Level.Iteration)
    public void setupIteration() throws Exception {
        // executed before each iteration
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Fork(warmups = 1, value = 1)
    @Warmup(batchSize = -1, iterations = 3, time = 10, timeUnit = TimeUnit.MILLISECONDS)
    @Measurement(batchSize = -1, iterations = 10, time = 10, timeUnit = TimeUnit.MILLISECONDS)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    public void test() throws Exception {
       Thread.sleep(ThreadLocalRandom.current().nextInt(0, iterations));
    }


    @Test
    public void benchmark() throws Exception {
        String[] argv = {};
        org.openjdk.jmh.Main.main(argv);
    }

}
Mosa answered 21/12, 2019 at 15:3 Comment(7)
Code-only answers are frowned-upon. How is this solution different and/or better than the existing answer? How does calling jmh.Main cause the correct tests to be run?Hoplite
This is just another, simplified approach. That's all.Mosa
I wasn't trying to criticize. I was listing questions that you should answer in the text of your post. It is bad to post some code without explanation.Hoplite
The difference is more or less obvious!?: The code above provides the test setup as annotations; the other one is a programmatic approach. Both have in common that JUnit is just used to start JMH. It's a personal preference; I prefer the annotation approach.Recusancy
Thanks, although I do get a message about "Unable to find the resource: /META-INF/BenchmarkList"Jetton
+1 Benchmark annotation-config is shared when running from Junit, build plugin or command-line. This supports running quick benchmarks from IDE (via Junit) and formal ones from build environmentEthiopian
To make it more obvious what is happening and make it more similar to the above answer, the benchmark method should probably be converted to this: @Test public void benchmark() throws Exception { Options opt = new OptionsBuilder() .include(TestBenchmark.class.getSimpleName()) .build(); //how to run benchmark and collect results Collection<RunResult> runResults = new Runner(opt).run(); }Boyett
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;
import org.junit.Test;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@Threads(1)
@Fork(1)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 5, time = 1)
@BenchmarkMode(Mode.All)
public class ToBytesTest {

  public static void main(String[] args) {
    ToBytesTest test = new ToBytesTest();
    System.out.println(test.string()[0] == test.charBufferWrap()[0] && test.charBufferWrap()[0] == test.charBufferAllocate()[0]);
  }

  @Test
  public void benchmark() throws Exception {
    org.openjdk.jmh.Main.main(new String[]{ToBytesTest.class.getName()});
  }

  char[] chars = new char[]{'中', '国'};

  @Benchmark
  public byte[] string() {
    return new String(chars).getBytes(StandardCharsets.UTF_8);
  }

  @Benchmark
  public byte[] charBufferWrap() {
    return StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars)).array();
  }

  @Benchmark
  public byte[] charBufferAllocate() {
    CharBuffer cb = CharBuffer.allocate(chars.length).put(chars);
    cb.flip();
    return StandardCharsets.UTF_8.encode(cb).array();
  }
}
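One caveat worth noting: main() above only compares the first byte of each result. The ByteBuffer returned by Charset.encode can be backed by an array larger than the encoded content, so calling array() may include trailing slack bytes, and the three methods do not necessarily return arrays of equal length. A stdlib-only sketch of a trimmed comparison (the class and method names here are mine):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class EncodeCheck {
    static final char[] CHARS = {'中', '国'};

    static byte[] viaString() {
        return new String(CHARS).getBytes(StandardCharsets.UTF_8);
    }

    // Copy only the encoded bytes; ByteBuffer.array() may be longer than the content.
    static byte[] viaCharBuffer() {
        ByteBuffer bb = StandardCharsets.UTF_8.encode(CharBuffer.wrap(CHARS));
        byte[] out = new byte[bb.remaining()];
        bb.get(out);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.equals(viaString(), viaCharBuffer())); // prints "true"
    }
}
```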
Pede answered 21/3, 2022 at 1:40 Comment(2)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.Tortfeasor
Code-only answers are frowned upon. At the least, please explain how your answer is different from other, similar answers.Hoplite
You may write your own JUnit Runner to run the benchmarks. This lets you run and debug them from the Eclipse IDE.

  1. Write a class extending org.junit.runner.Runner:

    public class BenchmarkRunner extends Runner {
      //...
    }
    
  2. Implement the constructor and a few methods:

    public class BenchmarkRunner extends Runner {
       public BenchmarkRunner(Class<?> benchmarkClass) {
       }
    
       public Description getDescription() {
        //...
       }  
    
       public void run(RunNotifier notifier) {
        //...
       }
    }
    
  3. Add the runner to your test class:

    @RunWith(BenchmarkRunner.class)  
    public class CustomCollectionBenchmark {
        //...
    }  
    

I've described it in detail in my blog post: https://vbochenin.github.io/running-jmh-from-eclipse
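For reference, here is a minimal sketch of what such a runner might look like. The method bodies are my own guess, not taken from the blog post; the runner simply delegates to the JMH Runner and reports the outcome back to JUnit:

```java
import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner extends Runner {
    private final Class<?> benchmarkClass;

    public BenchmarkRunner(Class<?> benchmarkClass) {
        this.benchmarkClass = benchmarkClass;
    }

    @Override
    public Description getDescription() {
        return Description.createSuiteDescription(benchmarkClass);
    }

    @Override
    public void run(RunNotifier notifier) {
        Description desc = getDescription();
        notifier.fireTestStarted(desc);
        try {
            Options opt = new OptionsBuilder()
                .include(benchmarkClass.getName() + ".*")
                .forks(1)
                .build();
            // Fully qualified to avoid the clash with org.junit.runner.Runner.
            new org.openjdk.jmh.runner.Runner(opt).run();
        } catch (Exception e) {
            notifier.fireTestFailure(new Failure(desc, e));
        } finally {
            notifier.fireTestFinished(desc);
        }
    }
}
```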

Aggrade answered 1/3, 2023 at 22:21 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.