What is the purpose of JMH @Fork?

IIUC, each fork creates a separate virtual machine because each virtual machine instance might run with slight differences in JIT compilation?

I'm also curious about what the time attribute does in the below annotations:

@Warmup(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)

TIA, Ole

Muscle answered 27/1, 2016 at 19:50 Comment(4)
I think that answers your question: #25573278 – Xenocrates
So IIUC, each fork runs in a separate VM. We do this because there might be subtle differences in each VM launched that will cause runtimes to differ, hence we can account for this in the variance calculation? – Muscle
Yeah, there might be differences between each run in the way the JIT behaves, so forking allows you to take account of that. When running benchmarks, I have seen extremely weird cases where one run would have a certain average and another run of the same test would have a very different one, "simply" because the JIT handled things differently. – Xenocrates
Great, good to know. Thanks Tunaki. BTW, I think that answer is a lot clearer than the reference, in case you want to add it as an answer. – Muscle

JMH offers the fork functionality for a few reasons. One is compilation profile separation, as discussed by Rafael in another answer on this page. But this behaviour is not controlled by the @Fork annotation (unless you choose 0 forks, which means no subprocesses are forked to run benchmarks at all). You can choose to run all the benchmarks as part of your benchmark warmup (thus creating a mixed profile for the JIT to work with) by using the warmup mode control (-wm).
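
For illustration, a minimal sketch of setting forks and the warmup mode programmatically through the JMH runner API (the class name and the include() pattern are made up):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import org.openjdk.jmh.runner.options.WarmupMode;

public class ForkOptionsExample {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include("MyBenchmark")      // hypothetical: regex matching your benchmark class
                .forks(5)                    // run each benchmark in 5 fresh JVM subprocesses
                .warmupMode(WarmupMode.BULK) // programmatic equivalent of -wm BULK:
                                             // warm up using all benchmarks, mixing the JIT profile
                .build();
        new Runner(opt).run();
    }
}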

The reality is that many things can conspire to tilt your results one way or another, and running any benchmark multiple times to establish run-to-run variance is an important practice which JMH supports (and most hand-rolled frameworks don't help with). Reasons for run-to-run variance might include (but I'm sure there are more):

  • CPUs start at a certain C-state and scale the frequency up in the face of load, then overheat and scale it down. You can control this issue on certain OSs.

  • Memory alignment of your process can lead to paging behaviour differences.

  • Background application activity.
  • CPU allocation by the OS will vary, resulting in different sets of CPUs used for each run.
  • Page cache contents and swapping.
  • JIT compilation is triggered concurrently and may lead to different results (this will tend to happen when larger bits of code are under test). Note that small single-threaded benchmarks will typically not have this issue.
  • GC behaviour can trigger with slightly different timings from run to run, leading to different results.

Running your benchmark with at least a few forks will help shake out these differences and give you an idea of the run-to-run variance you see in your benchmark. I'd recommend you start with the default of 10 and cut it back (or increase it) experimentally, depending on your benchmark.
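
As a sketch of the annotation-based equivalent (the benchmark class and workload below are made up for illustration):

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@Fork(10) // 10 separate JVM forks to expose run-to-run variance
@Warmup(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class ForkVarianceBenchmark {
    private int x = 42;
    private int y = 17;

    @Benchmark
    public int sum() {
        return x + y; // returning the result keeps it from being dead-code eliminated
    }
}

JMH prints the per-fork scores as well as an aggregate with error bounds, so a fork whose JIT profile went differently shows up directly as an outlier.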

Granule answered 2/2, 2016 at 7:14 Comment(0)

The JVM optimizes an application by creating a profile of the application's behavior. The fork is created to reset this profile. Otherwise, running:

benchmarkFoo();
benchmarkBar();

might result in different measurements than

benchmarkBar();
benchmarkFoo();

since the profile of the first benchmark influences the second.
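
A hypothetical sketch of how that influence can play out (all names below are made up): while only one implementation of an interface has been loaded, the JIT can devirtualize and inline calls to it; as soon as a second implementation appears in the same JVM, that optimization is invalidated, so the order in which benchmarkFoo and benchmarkBar run changes what gets measured. A fresh fork per benchmark resets that profile:

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class ProfilePollutionExample {
    interface Op { int apply(int x); }
    static final class Inc implements Op { public int apply(int x) { return x + 1; } }
    static final class Dec implements Op { public int apply(int x) { return x - 1; } }

    private final Op inc = new Inc();
    private int v = 100;

    @Benchmark
    public int benchmarkFoo() {
        // While Inc is the only Op implementation the JVM has loaded, class hierarchy
        // analysis lets the JIT devirtualize and inline this call.
        return inc.apply(v);
    }

    @Benchmark
    public int benchmarkBar() {
        // Merely loading a second implementation invalidates that assumption, so if
        // both benchmarks share one JVM, their order changes what benchmarkFoo measures.
        return new Dec().apply(v);
    }
}

With JMH's default settings each @Benchmark method runs in its own forked JVM, so this interaction mainly shows up with @Fork(0) or in hand-rolled harnesses that run everything in one process.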

The time attribute determines how long JMH spends on each warmup or measurement iteration. If these times are too short, your VM might not be warmed up sufficiently, or your result might have too high a standard deviation.
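
As a worked example with the annotations from the question: each fork spends roughly 10 iterations × 500 ms = 5 s warming up and another 10 × 500 ms = 5 s measuring, so the JIT gets about five seconds of profiling per fork before any result is recorded.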

Sikora answered 28/1, 2016 at 4:37 Comment(0)

Update:

JMH (the Java Microbenchmark Harness) has been added to the JDK starting with JDK 12.

The @Fork annotation instructs JMH how benchmark execution will happen: the value parameter controls how many forked JVMs the benchmark will be measured in, and the warmups parameter controls how many warmup forks will dry-run before results are collected.

Example:

@Benchmark
@Fork(value = 1, warmups = 3)
@BenchmarkMode(Mode.AverageTime)
public void myMethod() {
    // Do nothing
}

This instructs JMH to run three warm-up forks and discard their results before moving on to the single real timed fork.
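
The same settings can also be overridden when launching the benchmark jar. Assuming the standard uberjar produced by the JMH Maven archetype (the jar path and benchmark name here are assumptions), -f sets the measured forks and -wf the warmup forks:

java -jar target/benchmarks.jar MyMethodBenchmark -f 1 -wf 3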

Adiaphorous answered 15/1, 2020 at 7:9 Comment(0)
