Does somebody know what is the memory overhead of a ConcurrentHashMap (compared to a "classical" HashMap) ?
- At construction ?
- At insertion of an element ?
If you run the following with -XX:-UseTLAB -XX:NewSize=900m -mx1g on a 64-bit JVM:
public static void main(String... args) {
    for (int i = 0; i < 4; i++) {
        long used1 = usedMemory();
        populate(new HashMap<Integer, Integer>());
        long used2 = usedMemory();
        populate(new ConcurrentHashMap<Integer, Integer>());
        long used3 = usedMemory();
        System.out.println("The ratio of used memory is " + (double) (used3 - used2) / (used2 - used1));
        System.out.println("For an extra " + ((used3 - used2) - (used2 - used1)) / 1000000 + " bytes per entry was used.");
    }
}

private static void populate(Map<Integer, Integer> map) {
    for (Integer i = 0; i < 1000000; i++)
        map.put(i, i);
}

private static long usedMemory() {
    return Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
}
you get the following with Java 6 and 7 for one million entries:
The ratio of used memory is 1.1291128466982379
For an extra 8 bytes per entry was used.
The ratio of used memory is 1.1292086928728067
For an extra 8 bytes per entry was used.
The ratio of used memory is 1.1292086928728067
For an extra 8 bytes per entry was used.
The ratio of used memory is 1.1292086928728067
For an extra 8 bytes per entry was used.
Eight MB of memory costs around 5 cents.
ConcurrentHashMap doesn't use significantly more memory than HashMap, both at construction and at insertion.
At Initialization
ConcurrentHashMap uses almost the same amount of memory as a HashMap, perhaps slightly more because of a couple of extra book-keeping variables and locks.
During initialization, ConcurrentHashMap creates 16 Segments to store key-value pairs; each Segment is equivalent to a HashMap. The initial capacity of each Segment is 1/16 of the overall initial capacity, so in essence ConcurrentHashMap creates 16 small HashMaps equivalent to one HashMap. Each Segment has its own lock and a couple of book-keeping variables (count, threshold, etc.); this is the extra memory overhead.
You can control the number of Segments created by ConcurrentHashMap by passing an appropriate value for the concurrencyLevel parameter to the ConcurrentHashMap constructor. The smaller the value, the less space is used, but the more contention there is when a high number of threads update the map. The higher the value, the more Segments are created and the better parallel updates perform. Note: a significantly higher value for the concurrencyLevel parameter hurts both space and time.
This small memory overhead is what a developer accepts in exchange for concurrency.
At Insertion
When a Segment fills up, its size is increased. The resizing policy is the same as in HashMap: the loadFactor parameter decides when to increase the size of the Segment. Note that only the Segment that fills up is resized. Once again, the memory overhead is almost the same as HashMap's.
Overall, ConcurrentHashMap doesn't use significantly more memory than HashMap, but it's really hard to measure each and every extra byte that ConcurrentHashMap uses.
I don't really understand the premise of the question: either you need the concurrency or you don't.
However, according to this link, the memory footprint of an empty ConcurrentHashMap is 1700 bytes. It recommends using a ConcurrentHashMap if you have multiple threads that need read/write access, but a Hashtable if you have many threads that need read access and only one that writes.
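For reference, the thread-safe options mentioned in this thread can be sketched side by side; all class names are standard java.util / java.util.concurrent API, and which one fits depends on your read/write mix:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeMapChoices {
    public static void main(String[] args) {
        // Many concurrent readers and writers: ConcurrentHashMap allows
        // concurrent reads and lock-striped writes.
        Map<String, Integer> concurrent = new ConcurrentHashMap<String, Integer>();

        // Fully synchronized alternatives: every call locks the whole map.
        Map<String, Integer> table = new Hashtable<String, Integer>();
        Map<String, Integer> wrapped =
                Collections.synchronizedMap(new HashMap<String, Integer>());

        concurrent.put("hits", 1);
        table.put("hits", 1);
        wrapped.put("hits", 1);
        System.out.println(concurrent.get("hits") + " "
                + table.get("hits") + " " + wrapped.get("hits"));
    }
}
```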