What is Java's -XX:+UseMembar parameter

I see this parameter mentioned in all kinds of places (forums, etc.), and the common answer is that it helps highly concurrent servers. Still, I cannot find any official documentation from Sun explaining what it does. Also, was it added in Java 6, or did it already exist in Java 5?

(BTW, a good place for many HotSpot VM parameters is this page)

Update: Java 5 does not boot with this parameter.

Oralee answered 13/7, 2009 at 15:16 Comment(2)
BTW: -XX options are not officially supported and can be removed from future releases without notification. Multiplechoice
@Rastislav true, but in many cases you need to use them...Oralee

To optimise performance, the JVM uses a "pseudo memory barrier" in code to act as a fencing instruction when synchronizing across multiple processors. It is possible to revert to "true" memory barrier instructions, but this can have a noticeable (and bad) effect on performance.

Using -XX:+UseMembar causes the VM to revert to true memory barrier instructions. The parameter was originally intended to exist only temporarily, as a way to verify the new pseudo-barrier logic, but it turned out that the pseudo memory barrier code introduced some synchronization issues. I believe these are now fixed, but until they were, the accepted workaround was to use the reinstated flag.
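To make the "fencing" idea concrete, here is a minimal, self-contained Java sketch (illustrative only; the class name is made up and this is not HotSpot-internal code) of the classic Dekker-style store-load pattern whose correctness depends on a full memory barrier:

public class StoreLoadExample {

    // Two plain (non-volatile) flags. The CPU and JIT are allowed to reorder the
    // store of one flag with the subsequent load of the other unless a StoreLoad
    // barrier (a "membar") sits between them.
    static boolean flagA = false;
    static boolean flagB = false;

    static void threadOne() {
        flagA = true;             // store
        if (!flagB) {             // load: without a barrier it may read a stale value
            // "critical section" - both threads may end up here at the same time
        }
    }

    static void threadTwo() {
        flagB = true;             // store
        if (!flagA) {             // load
            // "critical section"
        }
    }
}

Declaring both flags volatile makes the JVM emit the required barrier between the store and the load (on x86 typically a locked instruction or mfence), which is exactly the kind of real fence the pseudo-barrier scheme tries to avoid on the VM's own hot paths.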

The bug was introduced in 1.5, and I believe the flag exists in 1.5 and 1.6.

I've google-fu'ed this from a variety of mailing lists and JVM bug reports.

Kennith answered 13/7, 2009 at 16:3 Comment(1)
Also see bugs.sun.com/bugdatabase/view_bug.do?bug_id=6822370: on newer CPUs and VMs with this bug you may see very odd synchronization issues (fixed in 6u18). Allrud

butterchicken only explains half of the story; I would like to add more detail to kmatveev's answer. Yes, the option is about thread state changes, and (pseudo) memory barriers are used to ensure that the change is visible to other threads, especially the VM thread. The thread states used in OpenJDK 6 are as follows:

//  _thread_new         : Just started, but not executed init. code yet (most likely still in OS init code)
//  _thread_in_native   : In native code. This is a safepoint region, since all oops will be in jobject handles
//  _thread_in_vm       : Executing in the vm
//  _thread_in_Java     : Executing either interpreted or compiled Java code (or could be in a stub)
...
 _thread_blocked           = 10, // blocked in vm   

Without the UseMembar option, on Linux, HotSpot uses a memory serialize page instead of a memory barrier instruction. Whenever a thread state transition happens, the thread writes to an address in the memory serialize page through a volatile pointer. When the VM thread needs an up-to-date view of all the threads' states, it changes the protection bits of the memory serialize page to read-only and then restores them to read/write, which serializes the state changes. The mechanism is described in more detail on the following page:

http://home.comcast.net/~pjbishop/Dave/Asymmetric-Dekker-Synchronization.txt
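As a rough Java-level sketch of the trade-off described above (illustrative only: the real code is C++ inside HotSpot, and the class, method and parameter names below are made up), the transitioning thread either pays for the fence itself, or does only cheap writes that the VM thread later forces to completion with the page-protection trick:

import java.util.concurrent.atomic.AtomicIntegerArray;

class ThreadStateSketch {
    // One slot per Java thread, holding values like _thread_in_Java or _thread_blocked.
    static final AtomicIntegerArray states = new AtomicIntegerArray(128);

    // -XX:+UseMembar: the transitioning thread issues the barrier itself, so the
    // VM thread can simply read the slot and trust what it sees.
    static void transitionWithMembar(int threadId, int newState) {
        states.set(threadId, newState);   // volatile-style write, includes the fence
    }

    // Default (pseudo barrier): the transitioning thread does only a cheap ordered
    // write plus a dummy store to the shared "memory serialize page". The VM thread
    // later flips that page to read-only and back (mprotect) to force the pending
    // stores to become visible; that step has no pure-Java equivalent and is
    // therefore only described here in comments.
    static void transitionWithSerializePage(int threadId, int newState, int[] serializePage) {
        states.lazySet(threadId, newState);                   // cheap store, no StoreLoad fence
        serializePage[threadId % serializePage.length] = 1;   // touch the pseudo serialize page
    }
}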

Retriever answered 15/6, 2011 at 19:8 Comment(0)

I don't agree with butterchicken's answer. This page http://www.md.pp.ru/~eu/jdk6options.html says that this flag causes memory barriers to be issued when a thread changes its state (from RUNNABLE to WAITING or to BLOCKED, for example).

Pristine answered 18/1, 2010 at 14:4 Comment(1)
It basically changes the behavior of every thread state change, so it applies in a wide variety of situations. butterchicken was too narrow and described one of the less common, less intrusive uses of a membar. The most intrusive is when serializing threads, and when changing thread state for immediate execution in any other manner, not when jumping CPUs (which is a thread state change, but one where issuing a membar would not be intrusive). Refractor

UseMembar determines whether or not to use membar instructions in a strict manner, forcing all memory actions to complete before continuing.

It basically stops the processor's delayed memory handling optimizations from messing with how the code is handled.

This generally slows things down, and isn't necessary on modern VMs for the vast majority of code.

Occasionally you run into issues where code SHOULD be thread-safe but isn't because of the lack of membar instructions. In these cases you can turn this on to get such code to work without switching to single-threading or messing around with the code's ordering to prevent the issue.
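As a hedged illustration of the kind of code meant here (the class name is made up), the classic case is a flag written by one thread and read by another with no barrier in between. Whether -XX:+UseMembar actually papers over a bug like this depends on the platform and JIT; the portable fix in application code is volatile or proper synchronization.

class StopFlagExample implements Runnable {
    private boolean stop = false;             // missing volatile: the update may never become visible

    public void run() {
        while (!stop) {
            // busy work; the JIT may hoist the read of 'stop' out of the loop entirely
        }
    }

    public void requestStop() {
        stop = true;                          // plain store, no barrier
    }

    public static void main(String[] args) throws InterruptedException {
        StopFlagExample task = new StopFlagExample();
        Thread worker = new Thread(task);
        worker.setDaemon(true);               // let the JVM exit even if the worker never sees the update
        worker.start();
        Thread.sleep(100);
        task.requestStop();                   // with 'volatile boolean stop' this reliably stops the worker
        worker.join(1000);                    // bounded wait, since the worker may spin forever
        System.out.println("worker still running: " + worker.isAlive());
    }
}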

The JVM is generally good at detecting code that will cause problems, and will either insert a membar or rearrange the JIT-compiled code to give the memory operations time to complete. In fact, in my web search on the topic, I found only one example of the bug, and it was fixed in recent versions of both the Oracle and OpenJDK builds of the HotSpot JVM.

As a note, on ARM architectures this option still defaults to on, because the alternative optimizations (the pseudo-membar approach) have not been applied there yet, and the platform would otherwise be very buggy without membar instructions.

Refractor answered 26/12, 2015 at 23:47 Comment(0)
