I'm running an SPDK experiment (which uses DPDK, which in turn uses hugepages) and it was working yesterday. I'm running it in a shared environment (I think one or two other people use this machine for other things). Now, whenever I try to run it, I get a "no free hugepages" error.
Output of /proc/meminfo is:
HugePages_Total: 1024
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
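In case it helps, the per-size counters can also be cross-checked in sysfs (a sketch, assuming the default 2 MB page size, hence the hugepages-2048kB directory):
# print nr_hugepages, free_hugepages, resv_hugepages, surplus_hugepages, etc. for 2 MB pages
grep . /sys/kernel/mm/hugepages/hugepages-2048kB/*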
Output of mount:
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb)
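Note that this is the hugetlb cgroup controller, not a hugetlbfs mount. Since the machine is shared, the cgroup accounting for 2 MB pages might also be relevant (a sketch, assuming cgroup v1 file naming):
# usage and limit of 2 MB hugepages as seen by the hugetlb cgroup controller
cat /sys/fs/cgroup/hugetlb/hugetlb.2MB.usage_in_bytes
cat /sys/fs/cgroup/hugetlb/hugetlb.2MB.limit_in_bytes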
A trick that worked in my last environment doesn't help anymore:
umount -a -t hugetlbfs
mount -t hugetlbfs nodev /mnt/huge
Then the output of /proc/meminfo is:
HugePages_Total: 1024
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 1024
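My (possibly wrong) understanding is that surplus pages are pages that live beyond the persistent pool size (vm.nr_hugepages), e.g. when the pool is shrunk while pages are still in use, so resetting the pool size is the first thing I would try (a sketch, run as root):
# set the persistent 2 MB hugepage pool back to 1024 pages
sysctl vm.nr_hugepages=1024
# equivalently:
echo 1024 > /proc/sys/vm/nr_hugepages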
But if I try running it:
EAL: No free hugepages reported in hugepages-1048576kB
EAL: No free hugepages reported in hugepages-2048kB
PANIC in rte_eal_init():
Cannot get hugepage information
Why are these pages surplus and not free? Is there any way I can free them? I'd rather not restart the system, since there might be other jobs running or other people using it.
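My understanding is that hugepages stay allocated as long as their backing files exist in a hugetlbfs mount, so checking for leftovers from a crashed run seems worth a shot (a sketch; rtemap_* is the prefix DPDK uses by default, and the paths are guesses for this machine):
# look for hugepage backing files left behind by a crashed DPDK/SPDK run
ls -l /mnt/huge /dev/hugepages 2>/dev/null
# deleting stale files should return those pages to the free pool
# rm /mnt/huge/rtemap_* /dev/hugepages/rtemap_*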
edit: I restarted the machine, allocated more hugepages, and they showed up as free. I executed the test, it crashed, and now the hugepages are lost again.
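Since the pages disappear exactly when the test crashes, something presumably still holds a reference to them. This is roughly how I plan to look for leftover users of the hugetlbfs mounts (a sketch; the mount paths are guesses for this machine):
# list processes that still have files open or mapped under the hugetlbfs mounts
fuser -vm /mnt/huge /dev/hugepages
# or look for hugetlb mappings across all processes
grep -l huge /proc/*/numa_maps 2>/dev/null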
Relevant questions with no working answer (at least for me):
How to release hugepages from the crashed application
How to really free hugepages in Linux for use by a new process?