Lost memory on Linux - not cached, not buffers

My Ubuntu 12 server is mysteriously losing/wasting memory. It has 64GB of RAM. About 46GB are shown as used even when I shut down all my applications. This memory is not reported as used for buffers or caching.

The result of top (while my apps are running; the apps use about 9G):

top - 21:22:48 up 46 days, 10:12,  1 user,  load average: 0.01, 0.09, 0.12
Tasks: 635 total,   1 running, 633 sleeping,   1 stopped,   0 zombie
Cpu(s):  0.2%us,  0.2%sy,  0.0%ni, 99.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  65960100k total, 55038076k used, 10922024k free,   271700k buffers
Swap:        0k total,        0k used,        0k free,  4860768k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                                         
  5303 1002      20   0 26.2g 1.2g  12m S    0  1.8   2:08.21 java                                                                                                                                             
  5263 1003      20   0  9.8g 995m 4544 S    0  1.5   0:19.82 mysqld                                                                                                                                           
  7021 www-data  20   0 3780m  18m 2460 S    0  0.0   8:37.50 apache2                                                                                                                                          
  7022 www-data  20   0 3780m  18m 2540 S    0  0.0   8:38.28 apache2      
  .... (smaller processes)

Note that top reports 4.8G as cached, not 48G, while 55G are shown as used. The result of free -m:

             total       used       free     shared    buffers     cached
Mem:         64414      53747      10666          0        265       4746
-/+ buffers/cache:      48735      15678
Swap:            0          0          0
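
In other words, subtracting buffers and cache from the used figure still leaves roughly 48G that no process accounts for:

used - buffers - cached = 53747 - 265 - 4746 = 48736 MB, i.e. the ~48.7G shown in the -/+ buffers/cache row.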

What is using my memory? I've tried every diagnostic I could come across. Forums are swamped with people asking the same question because Linux is using their RAM for buffers/cache, but that doesn't seem to be what is going on here.
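
For example, these are the kinds of checks I mean for kernel-side memory (the Slab counters in /proc/meminfo and slabtop are the usual places to look when no process owns the memory):

grep -E 'Slab|SReclaimable|SUnreclaim|KernelStack|PageTables|VmallocUsed' /proc/meminfo
sudo slabtop -o -s c | head -25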

It might be relevant that the system is a host for LXC containers. The top and free results reported above are from the host, but similar memory usage is reported within the containers. Stopping all containers does not free up the memory; some 46G remain in use. However, if I restart the host, the memory is freed. It takes a while before usage climbs back to 46G (I don't know whether that takes days or weeks, but it's more than a few hours).
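
For reference, per-container memory usage can also be checked through the memory cgroups LXC creates (this assumes cgroup v1 mounted under /sys/fs/cgroup, which is the usual layout on this kind of setup; the container name is a placeholder):

cat /sys/fs/cgroup/memory/lxc/<container-name>/memory.usage_in_bytes
cat /sys/fs/cgroup/memory/lxc/<container-name>/memory.stat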

It might also be relevant that the system is using ZFS. ZFS is reputed to be memory-hungry, but not that much. This system has two ZFS filesystems on two raidz pools, one of 1.5T and one of 200G. I have another server that exhibits exactly the same problem (46G used by nothing) and is configured pretty much identically, but with a 3T array instead of 1.5T. I have lots of snapshots (100 or so) of each ZFS filesystem. I normally have one snapshot of each filesystem mounted at any time. Unmounting those does not give me back my memory.
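
For context, the pool layout and snapshot count described above come from output along these lines (names omitted):

zpool list
zfs list -H -t snapshot | wc -l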

I can see that the VIRT numbers in the output above roughly coincide with the memory used, but the memory remains used even after I shut down these apps, and even after I shut down the container that runs them.

EDIT: I tried adding some swap, and something interesting happened. I added 30G of swap. Moments later, the amount of memory marked as cached in top had increased from 5G to 25G. free -m indicated about 20G more usable memory. I added another 10G of swap, and the cached figure rose to 33G. If I add another 10G of swap, I get 6G more recognized as cached. All this time, only a few kilobytes of swap are reported as used. It's as if the kernel needed to have matching swap for every bit that it recognizes or reports as cached. Here is the output of top with 40G of swap (a generic sketch of adding swap in chunks like this follows after the output):

top - 23:06:45 up 46 days, 11:56,  2 users,  load average: 0.01, 0.12, 0.13
Tasks: 586 total,   1 running, 585 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  65960100k total, 64356228k used,  1603872k free,   197800k buffers
Swap: 39062488k total,     3128k used, 39059360k free, 33101572k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                                         
 6440 1002      20   0 26.3g 1.5g  11m S    0  2.4   2:02.87 java                                                                                                                                             
 6538 1003      20   0  9.8g 994m 4564 S    0  1.5   0:17.70 mysqld                                                                                                                                           
 4707 dbourget  20   0 27472 8728 1692 S    0  0.0   0:00.38 bash      
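
For anyone reproducing the experiment, adding swap in chunks like this is typically done with swap files; a rough sketch (paths and sizes are placeholders, and swap files on ZFS datasets are generally discouraged):

dd if=/dev/zero of=/swapfile1 bs=1M count=10240   # create a 10G file
chmod 600 /swapfile1
mkswap /swapfile1
swapon /swapfile1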

Any suggestions highly appreciated.

EDIT 2: Here are the arc* values from /proc/spl/kstat/zfs/arcstats

arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    1531800648
arc_meta_limit                  4    8654946304
arc_meta_max                    4    8661962768

There is no L2ARC activated for ZFS
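
For completeness, the L2ARC counters live in the same arcstats file, and a cache device would show up under a "cache" section in zpool status:

grep '^l2_' /proc/spl/kstat/zfs/arcstats
zpool status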

Dallasdalli answered 15/9, 2013 at 1:50 Comment(1)
Have you figured this out? I have no clues but I'd like to know what causes it! – Redeploy

This memory is very likely used by the ZFS ARC cache and other ZFS-related data stored in kernel memory. The ARC is somewhat similar to the buffer cache, so there is generally nothing to worry about: ZFS releases this memory when there is demand for it.

However, there is a subtle difference between buffer cache memory and ARC memory. The former is immediately available for allocation, while ARC memory is not: ZFS monitors the amount of free RAM and, when it gets too low, releases RAM back to other consumers.

This works fine with most applications, but a minority of them are either confused when a low amount of available RAM is reported, or allocate memory too fast (or too much at once) for the release process to keep pace.

That's why ZFS lets you cap the maximum size the ARC is allowed to use. This setting goes in the /etc/modprobe.d/zfs.conf file.

For example, should you want the ARC never to exceed 32 GB, add this line:

options zfs zfs_arc_max=34359738368
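
This module option takes effect when the zfs module is loaded (usually at boot). If I remember correctly, ZFS on Linux also exposes the limit as a runtime-tunable parameter, so you may be able to lower it without rebooting (as root; the ARC shrinks lazily, so the memory is not returned instantly):

echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max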

To get the current ARC size and various other ARC statistics, run this command:

cat /proc/spl/kstat/zfs/arcstats

The size metric shows the current size of the ARC. Beware that other ZFS-related memory areas may also take a share of RAM and won't necessarily be released quickly even when no longer used. Finally, ZFS on Linux is certainly less mature than the native Solaris implementation, so you might be hit by a bug like this one.
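
For instance, to pull out just the current ARC size in gigabytes (the values in arcstats are in bytes):

awk '$1 == "size" {printf "ARC size: %.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats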

Note too that, due to the shared storage pool design, unmounting a ZFS file system won't free any resources. You would need to export the pool for its memory to eventually be released.
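
For example (the pool name is a placeholder; exporting takes all of its datasets offline, so only do this when nothing is using them):

zpool export <poolname>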

Mastitis answered 15/9, 2013 at 2:19 Comment(4)
jilliagre: I should have said that I've experienced out-of-memory errors (which is what drew my attention to this). So either ZFS is ill-behaved or this is not it. I can't find how to check the amount of memory used by ZFS. There is no obviously relevant line in the output of memstat. – Dallasdalli
Thanks. I'm not sure how to read the arcstats values. Assuming they are expressed in bytes, it looks like the ARC is only using 1.5G. See EDIT 2. – Dallasdalli
Please post the c_min, c_max and size values, along with the settings present in zfs.conf (if any). – Mastitis
On Ubuntu 16.04 I needed to run update-initramfs -u -k all before rebooting to have the settings from /etc/modprobe.d/zfs.conf propagated. – Muezzin
