How to use the Java memory histogram tool "jmap"

We have a Java streaming server running in production. It requires ~10GB of RAM for normal operation, so the machine has 32GB installed. Memory usage gradually increases until the limit is reached and out-of-memory exceptions pop up.

I have trouble pinpointing which objects are accumulating over time, because the histogram and heap dump numbers do not match the system-reported memory usage: the Java process occupies a little more than the 20GB maximum (so the out-of-memory exceptions are justified), but the histogram and heap dump show a total of only 6.4GB used.

    process total   : 19.8G
    Java reported   :  6.4G
    ------------------------
    unknown segment : 13.4G

How can I get information about the memory in the unknown occupied segment that is not shown in the histogram?

I used the jmap -J-d64 -histo <pid> > <file> command to generate the histogram.
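
For reference, a binary heap dump of live objects can also be triggered from inside the process on a HotSpot JVM; a minimal sketch (the output path is just an example):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    // Dumps live objects to an hprof file, equivalent to
    // "jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>" (HotSpot only).
    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap("/tmp/heap.hprof", true); // true = live objects only
        }
    }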

The process has the following memory segments mapped, sorted by size:

0x2DE000000: 13333.5MB
0x61F580000: 6666.5MB
0x7C0340000: 1020.8MB
0x7FBFF33C9000: 716.2MB
0x7FC086A75000: 196.9MB
0x7FB85C000000: 64.0MB
0x7FBAC0000000: 64.0MB
...

The total size of all Java objects reported by jmap fits within the 0x61F580000 (6666.5MB) segment.

My guess is that the larger segment, 0x2DE000000 (13333.5MB), holds the leaked memory, because the histogram shows normal memory usage for this application.

Is there a way to find out what is occupying the memory that is not included in the histogram?

How can I detect whether the closed-source part of the server uses native extensions to allocate system memory instead of Java heap memory? In that case we would not see out-of-memory exceptions, right?
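
The only native allocations I know how to check from inside the JVM are the NIO buffer pools; a minimal sketch (this will not see memory that native code allocates directly with malloc):

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;

    // Reports the NIO buffer pools ("direct" and "mapped"). A large "direct"
    // pool means native memory is reserved outside the Java heap, which
    // "jmap -histo" will not account for.
    public class DirectBufferCheck {
        public static void main(String[] args) {
            for (BufferPoolMXBean pool :
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                System.out.printf("%-8s count=%d used=%dMB capacity=%dMB%n",
                        pool.getName(),
                        pool.getCount(),
                        pool.getMemoryUsed() / (1024 * 1024),
                        pool.getTotalCapacity() / (1024 * 1024));
            }
        }
    }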

Here is the htop output:

  Mem[|||||||||||||||||||||31670/31988MB]     Tasks: 87; 35 running
  Swp[||||||||||||||||||   16361/32579MB]     Load average: 39.33 36.00 34.72
                                              Uptime: 44 days, 15:08:19

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 3498 root       20   0 51.5G 19.8G  4516 S 151. 63.4     176h java -server -Xmx20000M -

Here is the partial histogram output:

 num     #instances         #bytes  class name
----------------------------------------------
   1:       1134597     5834904800  [B
   2:        407694      144032664  [Ljava.lang.Object;
   3:       2018132      111547480  [C
   4:        100217       71842520  [I
   5:        581934       55865664  com.wowza.wms.httpstreamer.cupertinostreaming.livestreampacketizer.CupertinoRepeaterHolder
   6:        568535       36386240  com.wowza.wms.httpstreamer.cupertinostreaming.livestreampacketizer.CupertinoTSHolder
   7:        975220       23405280  java.lang.String
   8:        967713       23225112  com.wowza.wms.amf.AMFObjChunk
   9:        621660       14919840  com.wowza.wms.httpstreamer.cupertinostreaming.livestreampacketizer.LiveStreamingCupertinoBlock
  10:        369892       11836544  java.util.ArrayList$Itr
  11:        184502       11808128  com.wowza.wms.amf.AMFPacket
  12:        329055        7897320  java.util.ArrayList
  13:         55882        6705840  com.wowza.wms.server.RtmpRequestMessage
  14:        200263        6408416  java.util.HashMap$Node
  15:         86784        6248448  com.wowza.wms.httpstreamer.cupertinostreaming.livestreampacketizer.CupertinoPacketHolder
  16:         24815        5360040  com.wowza.wms.media.h264.H264CodecConfigInfo
  17:        209398        5025552  java.lang.StringBuilder
  18:        168061        4033464  com.wowza.util.PacketFragment
  19:        119160        3813120  java.util.concurrent.locks.AbstractQueuedSynchronizer$Node
  20:         93849        3753960  java.util.TreeMap$Entry
  21:        155756        3738144  java.lang.Long
  22:         55881        3576384  com.wowza.wms.server.RtmpResponseMessage
  23:         55760        3568640  com.wowza.util.FasterByteArrayOutputStream
  24:        130452        3130848  java.util.concurrent.LinkedBlockingQueue$Node
  25:         63172        3032256  java.util.HashMap
  26:         58747        2819856  java.nio.HeapByteBuffer
  27:         34830        2800568  [J
  28:         49076        2355648  java.util.TreeMap$AscendingSubMap
  29:         70567        2258144  com.wowza.wms.stream.livepacketizer.LiveStreamPacketizerBase$PacketizerEventHolder
  30:         55721        2228840  org.apache.mina.util.Queue
  31:         54990        2199600  java.util.HashMap$KeyIterator
  32:         58583        1874656  org.apache.mina.common.SimpleByteBufferAllocator$SimpleByteBuffer
  33:        112743        1803888  java.lang.Integer
  34:         55509        1776288  com.wowza.wms.server.ServerHandlerEvent
...
2089:             1             16  sun.util.resources.LocaleData$LocaleDataResourceBundleControl
Total      11078054     6454934408

The Java version is:

# java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
Duclos asked 4/6/2014 at 9:33

Comment: This question is out of date unfortunately, but maybe someone else will need it. Somewhere a large direct buffer is being allocated; see java.nio.DirectByteBuffer. (Epizootic)
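
To illustrate that comment: direct buffers reserve memory outside the Java heap, so the histogram only shows the small java.nio.DirectByteBuffer wrapper objects while the process RSS keeps growing. A hypothetical repro sketch:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical repro: each allocateDirect() call reserves native memory
    // outside the Java heap, so "jmap -histo" keeps reporting only the tiny
    // DirectByteBuffer wrappers while the process RSS grows (until the
    // direct-memory limit is hit and an OutOfMemoryError is thrown).
    public class DirectBufferGrowth {
        public static void main(String[] args) throws InterruptedException {
            List<ByteBuffer> buffers = new ArrayList<>();
            while (true) {
                buffers.add(ByteBuffer.allocateDirect(64 * 1024 * 1024)); // 64MB
                Thread.sleep(1000);
            }
        }
    }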

The memory is divided between:

  1. Heap space - what is on the heap, reachable from the GC roots
  2. Perm gen - the loaded classes etc.; static variables also live here
  3. Stack space - temporary method-level variables stored on the stack, plus thread locals

Depending on how the jmap histogram is extracted, the perm gen might not be included. The stack space is never included. These might account for the extra memory that you are not seeing in your heap dump.
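
If you can run a quick check inside the process (or over JMX), listing the JVM memory pools shows how much of the footprint is heap versus non-heap (perm gen, code cache); a minimal sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    // Prints each JVM-managed memory pool (heap and non-heap) with its
    // current usage, so heap, perm gen and code cache can be compared
    // against the process RSS reported by the OS.
    public class MemoryPools {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                System.out.printf("%-30s %-9s used=%dMB committed=%dMB%n",
                        pool.getName(),
                        pool.getType(),
                        pool.getUsage().getUsed() / (1024 * 1024),
                        pool.getUsage().getCommitted() / (1024 * 1024));
            }
        }
    }

Anything the OS reports beyond these committed totals is native memory that the JVM-managed pools do not track (thread stacks, direct buffers, memory allocated by JNI code).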

Check these probable causes:

  1. Are you using dynamically generated classes, causing the perm gen to grow very large?
  2. Are you using thread locals and storing a lot of data in them? (A sketch of this pattern follows the list.)
  3. Are local method variables continuously growing in size for some reason?
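
A typical shape of cause 2, as a hypothetical sketch (class and field names are made up): data parked in a ThreadLocal on pooled threads is never removed, so it stays reachable for as long as the pool threads live.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical illustration: each task parks 1MB in a ThreadLocal on the
    // current pool thread and never calls remove(), so the data accumulates
    // for the lifetime of the long-lived pool threads.
    public class ThreadLocalLeakDemo {
        private static final ThreadLocal<List<byte[]>> CACHE =
                ThreadLocal.withInitial(ArrayList::new);

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 1000; i++) {
                pool.submit(() -> CACHE.get().add(new byte[1024 * 1024]));
            }
            pool.shutdown();
        }
    }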

Locally

If the problem reproduces locally, you can use JConsole to check the sizes of all the memory spaces and figure out where so much non-heap space is being used.

Try loading the heap dump into the Memory Analyzer tool (MAT) to check the leak suspects. It might reveal information you are missing.

Try the following settings on the production setup:

  1. Increase the heap size by raising -Xmx
  2. Increase the perm gen space using -XX:MaxPermSize=256m
  3. Set the following flags: -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -Xloggc:/path/to/garbage.log
  4. Enable GC of the perm gen

To diagnose the problem better, please provide:

  1. The stack trace of the OutOfMemoryError; it should give information about what is causing the issue.
  2. The JVM flags you are supplying when starting the process.
Prompt answered 5/6/2014 at 13:46
