Currently, in our testing environment, the max and min JVM heap sizes are set to the same value: basically as much as the dedicated server machine will allow for our application. Is this the best configuration for performance, or would giving the JVM a range be better?
The main reason to set -Xms is if you need a certain heap size on startup (it prevents OutOfMemoryErrors from happening at startup). As mentioned above, if you need the startup heap to match the max heap, that is when you would match them. Otherwise you don't really need it; it just asks the application to take up more memory than it may ultimately need. Watching your memory use over time (profiling) while load testing and using your application should give you a good feel for what you need to set them to. But it isn't the worst thing to set them to the same value on startup. For a lot of our apps, I actually start out with something like 128, 256, or 512 MB for min (startup) and one gigabyte for max (this is for non-application-server applications).
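As a rough way to do that profiling, you can sample the heap from inside the app while a load test runs. A minimal sketch using the standard Runtime API (the class name, loop count, and interval are made up for illustration):

    import java.util.concurrent.TimeUnit;

    public class HeapWatcher {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            // Sample the heap periodically while a load test runs elsewhere.
            for (int i = 0; i < 10; i++) {
                long max = rt.maxMemory();         // upper bound, roughly -Xmx
                long committed = rt.totalMemory(); // heap currently reserved; starts near -Xms
                long used = committed - rt.freeMemory();
                System.out.printf("used=%d MB committed=%d MB max=%d MB%n",
                        used >> 20, committed >> 20, max >> 20);
                TimeUnit.SECONDS.sleep(5);
            }
        }
    }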
Just found this question on Stack Overflow, which may also be helpful: side-effect-for-increasing-maxpermsize-and-max-heap-size. Worth a look.
A small Xms would initialize and then throw an OOME, whereas starting with a big Xms would just fail to initialize the VM, which is hardly better. – Giulio

Update: This answer was originally written in 2014 and is obsolete.
Peter's answer is correct in that -Xms is allocated at startup and the heap will grow up to -Xmx (max heap size), but it's a little misleading in how he has worded his answer. (Sorry Peter, I know you know this stuff cold.)
Setting -Xms == -Xmx effectively turns off this behavior. While that used to be a good idea with older JVMs, it is no longer the case: growing and shrinking the heap allows the JVM to adapt to increases in memory pressure, yet reduce pause times by shrinking the heap when the pressure eases. Sometimes this behavior doesn't give you the performance benefits you'd expect, and in those cases it's best to set -Xmx == -Xms.
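To watch that growing and shrinking happen, here is a hedged sketch (run with something like -Xms64m -Xmx2g; the burst-allocation pattern exists only to provoke a resize, and System.gc() is merely a hint):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;
    import java.util.ArrayList;
    import java.util.List;

    public class HeapSizing {
        public static void main(String[] args) {
            // Allocate a burst of data, release it, and observe the
            // committed heap move between -Xms and -Xmx.
            List<byte[]> burst = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                burst.add(new byte[1 << 20]); // 1 MB chunks
            }
            burst.clear();
            System.gc(); // a hint; the JVM may shrink the heap afterwards
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("init=%d MB committed=%d MB max=%d MB%n",
                    heap.getInit() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        }
    }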
An OOME is thrown when more than 98% of total time is spent collecting and the collections recover less than 2% of the heap. If you are not at max heap size, the JVM will simply grow the heap so that you move back below those thresholds. You cannot have an OutOfMemoryError on startup unless your heap hits the max heap size and meets the other conditions that define an OutOfMemoryError.
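A quick way to convince yourself of that is a throwaway program run with a small cap, say -Xmx64m (the class name and block size are illustrative only):

    import java.util.ArrayList;
    import java.util.List;

    public class OomeDemo {
        public static void main(String[] args) {
            List<byte[]> hoard = new ArrayList<>();
            try {
                while (true) {
                    hoard.add(new byte[1 << 20]); // keep 1 MB blocks reachable
                }
            } catch (OutOfMemoryError e) {
                int retained = hoard.size();
                hoard.clear(); // release memory so the print below can allocate
                // The error only appears once the heap has grown to -Xmx and
                // GC can no longer recover enough space.
                System.err.println("OOME after retaining ~" + retained + " MB");
            }
        }
    }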
As for the comments that have come in since I posted: I don't know what the JMonitor blog entry is showing, but this is from the PSYoung collector.
// From HotSpot's PSYoungGen: clamp the desired generation size between
// the configured minimum and the generation's upper limit.
size_t desired_size = MAX2(MIN2(eden_plus_survivors, gen_size_limit()),
                           min_gen_size());
I could do more digging, but I'd bet I'd find code that serves the same purpose in the ParNew, PSOldGen, and CMS Tenured implementations. In fact, it's unlikely that CMS would be able to return memory unless there has been a Concurrent Mode Failure. In the case of a CMF, the serial collector will run, and that should include a compaction, after which the top of the heap would most likely be clean and therefore eligible to be deallocated.
AFAIK, setting both to the same size does away with the additional step of heap resizing, which might be in your favour if you pretty much know how much heap you are going to use. Also, having a large heap size reduces GC invocations to the point that they happen very few times. In my current project (risk analysis of trades), our risk engines have both Xmx and Xms set to the same, pretty large value (around 8 GiB). This ensures that even after an entire day of invoking the engines, almost no GC takes place.
Also, I found an interesting discussion here.
Definitely yes for a server app. What's the point of having so much memory but not using it? (No, it doesn't save electricity if you don't use a memory cell.)
The JVM loves memory. For a given app, the more memory the JVM has, the less GC it performs. The best part is that more objects will die young and fewer will be tenured.
Especially during server startup, the load is even higher than normal. It's brain dead to give the server a small amount of memory to work with at this stage.
Xms is not a limitation on how much memory the JVM has at launch. As an aside, language like "brain dead" isn't helpful. – Merline

Updated answer as of 2023, valid from JVM 7 to JVM 20+.
You should always set Xms=Xmx for server applications that use multiple GB of memory.
1) It's necessary to avoid heap resizing. Resizing is a very slow and very intensive operation that should be avoided.
A resize requires reallocating a large amount of memory and moving all existing Java objects; it can take a while, and the application is frozen during that time. Having a small Xms and a large Xmx will lead to multiple heap resizes. Some JVM versions and garbage collectors may not move all objects or pause all threads at once, but the principle remains.
Consider a typical ElasticSearch database with a 30 GB heap. Starting the heap at 1 GB and growing it to full size will require a dozen resize operations, and the larger resizes will take entire seconds (freezes) to shuffle millions of objects across gigabytes of memory. Bonus points: the ElasticSearch cluster monitors when a node is unresponsive for N seconds and drops it.
It's critical to set Xms=Xmx. Most server software has a relatively large heap and is sensitive to pauses (latency).
2) The JVM might never grow the heap.
The JVM will decide to do a full GC or to resize the heap when there is memory pressure. It will free objects if possible rather than grow the heap; in my experience, the JVM tries to keep the heap small (this may vary with the JVM).
It makes sense to minimize heap size on a system where memory is limited and other applications have to run (e.g. a typical desktop). It doesn't make sense to minimize heap size on a system with dedicated resources (e.g. a typical server).
Consider a web server: each incoming request will churn through some kB or MB of heap, and the web server will run a GC every now and then to reclaim memory:
- The web server could run with 128 MB of heap (say, a 50 MB baseline constantly in use) and run a slow GC every 78 small requests.
- Or the web server could run with 1024 MB of heap and process a thousand requests between GCs. HINT: this is more stable and responsive.
You might expect the JVM to grow the heap in the first scenario, but it doesn't have to. The application can run with little memory. The JVM simply needs to collect unused objects more frequently because there is less room.
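If you want to measure that trade-off rather than take it on faith, a small sketch can count collections under simulated request churn; run it twice, e.g. with -Xmx128m and then -Xmx1g (the per-request allocation size is an arbitrary stand-in):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class ChurnDemo {
        public static void main(String[] args) {
            // Simulate requests, each producing short-lived garbage.
            for (int request = 0; request < 100_000; request++) {
                byte[] scratch = new byte[64 * 1024]; // ~64 kB per "request"
                scratch[0] = 1; // touch the array so it is actually used
            }
            long collections = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                collections += gc.getCollectionCount();
            }
            System.out.println("GC runs: " + collections);
        }
    }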
3) A variable heap size is at odds with caching and buffering.
Server software can do heavy caching, buffering, and queuing to optimize performance and reduce latency. All of these patterns expect memory to be available upfront; they usually need to be configured with a fixed size or a percentage of the heap.
Consider a logstash relay configured by default to use 20% of memory for buffering incoming log messages. What happens when the heap size is variable? Maybe logstash has 10 MB of buffer, maybe 1 GB? Is the buffer adjusted when the heap is resized? It doesn't make any sense to use a variable heap size here; Xms should be the same as Xmx.
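The ambiguity is easy to show in code. A hypothetical sketch (BUFFER_FRACTION and the class are invented; the 20% figure just mirrors the logstash example above):

    public class BufferSizing {
        // Hypothetical "use 20% of memory for buffering" setting.
        private static final double BUFFER_FRACTION = 0.20;

        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // With a variable heap these two answers can differ wildly:
            long fromMax = (long) (rt.maxMemory() * BUFFER_FRACTION);         // 20% of -Xmx
            long fromCommitted = (long) (rt.totalMemory() * BUFFER_FRACTION); // 20% of the current heap
            System.out.printf("buffer from max: %d MB, from committed: %d MB%n",
                    fromMax >> 20, fromCommitted >> 20);
            // With Xms == Xmx the two values agree and the buffer size is predictable.
        }
    }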
From what I see at http://java-monitor.com/forum/showthread.php?t=427, the JVM under test begins with the Xms setting, but WILL deallocate memory it doesn't need, and it will take memory up to the Xmx mark when it needs it.
Unless you need a chunk of memory dedicated to a big memory consumer initially, there's not much point in setting a high Xms=Xmx. It looks like deallocation and allocation occur even with Xms=Xmx.