MongoDB in Docker: numactl --interleave=all explanation

I'm trying to create a Dockerfile for an in-memory MongoDB based on the official repo at https://hub.docker.com/_/mongo/.

In docker-entrypoint.sh I've encountered:

numa='numactl --interleave=all'
if $numa true &> /dev/null; then
    set -- $numa "$@"
fi

Basically it prepends numactl --interleave=all to the original Docker command when numactl exists and the policy can be applied.
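A minimal sketch of the effect, assuming (hypothetically) the container was started with mongod --dbpath /data/db; any command and arguments would do:

# Sketch only: "mongod --dbpath /data/db" stands in for whatever the container was started with.
set -- mongod --dbpath /data/db
numa='numactl --interleave=all'
if $numa true &> /dev/null; then   # can numactl run and apply the policy on this host?
    set -- $numa "$@"              # "$@" is now: numactl --interleave=all mongod --dbpath /data/db
fi
exec "$@"                          # run the (possibly wrapped) command as PID 1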

But I don't really understand this NUMA policy thing. Can you please explain what NUMA really means, and what --interleave=all stands for?

And why do we need to use it to create MongoDB instance?

Interscholastic answered 24/10, 2015 at 8:8 Comment(0)

The man page mentions:

The libnuma library offers a simple programming interface to the NUMA (Non Uniform Memory Access) policy supported by the Linux kernel. On a NUMA architecture some memory areas have different latency or bandwidth than others.

This isn't available on all architectures, which is why issue 14 made sure to call numactl only on NUMA machines.

As explained in "Set default numa policy to “interleave” system wide":

It seems that most applications that recommend explicit numactl definition either make a libnuma library call or incorporate numactl in a wrapper script.


The interleave=all policy alleviates the kind of issue encountered by applications like Cassandra (a distributed database for managing large amounts of structured data across many commodity servers):

By default, Linux attempts to be smart about memory allocations such that data is close to the NUMA node on which it runs. For big database type of applications, this is not the best thing to do if the priority is to avoid disk I/O. In particular with Cassandra, we're heavily multi-threaded anyway and there is no particular reason to believe that one NUMA node is "better" than another.

Consequences of allocating unevenly among NUMA nodes can include excessive page cache eviction when the kernel tries to allocate memory - such as when restarting the JVM.

For more, see "The MySQL “swap insanity” problem and the effects of the NUMA architecture"
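In the same spirit, MongoDB's production notes recommend disabling zone reclaim and starting mongod interleaved across all nodes. A sketch of what that looks like on a Linux host (the config path is only an example):

cat /proc/sys/vm/zone_reclaim_mode                          # 0 means zone reclaim is already disabled
echo 0 | sudo tee /proc/sys/vm/zone_reclaim_mode            # disable zone reclaim, per the MongoDB production notes
numactl --interleave=all mongod --config /etc/mongod.conf   # interleave mongod's allocations across all NUMA nodes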

Without NUMA interleaving:

In a NUMA-based system, where the memory is divided into multiple nodes, how the system should handle this is not necessarily straightforward.
The default behavior of the system is to allocate memory in the same node as a thread is scheduled to run on, and this works well for small amounts of memory, but when you want to allocate more than half of the system memory it’s no longer physically possible to even do it in a single NUMA node: In a two-node system, only 50% of the memory is in each node.

(Image: numa-imbalanced-allocation.png — memory allocated unevenly across NUMA nodes)
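One way to observe that kind of imbalance on a running host is numastat, which ships alongside numactl. A sketch (the process name mongod is just the obvious candidate here):

numastat                 # per-node counters: numa_hit, numa_miss, interleave_hit, ...
numastat -p mongod       # per-node memory breakdown for processes matching "mongod"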

With NUMA interleaving:

An easy solution to this is to interleave the allocated memory. It is possible to do this using numactl as described above:

# numactl --interleave all command

I mentioned in the comments that NUMA enumerates the hardware to understand the physical layout, and then divides the processors (not cores) into “nodes”.
With modern PC processors, this means one node per physical processor, regardless of the number of cores present.

That is a bit of an over-simplification, as Hristo Iliev points out:

AMD Opteron CPUs with a larger number of cores are actually 2-way NUMA systems on their own, with two HT (HyperTransport)-interconnected dies with their own memory controllers in a single physical package.
Also, Intel Haswell-EP CPUs with 10 or more cores come with two cache-coherent ring networks and two memory controllers and can be operated in a cluster-on-die mode, which presents itself as a two-way NUMA system.

It is wiser to say that a NUMA node is some cores that can reach some memory directly without going through an HT, QPI (QuickPath Interconnect), NUMAlink or some other interconnect.
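To see how many such nodes a machine actually exposes, and which CPUs and how much memory belong to each, numactl itself can report the layout. A sketch (output varies per machine; a single-socket desktop typically reports a single node):

numactl --hardware       # lists the nodes, the CPUs attached to each, and per-node total/free memory
numactl --show           # shows the NUMA policy the current shell would run commands under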

(Image: numa-balanced-allocation.png — memory interleaved evenly across NUMA nodes)

Fenugreek answered 24/10, 2015 at 9:34 Comment(4)
Is "node" here a physical memory device? Or some kind of logical division primitive that is existed only on NUMA systems?Interscholastic
@Interscholastic numa enumerates the hardware to understand the physical layout. And then divides the processors (not cores) into “nodes”. With modern PC processors, this means one node per physical processor, regardless of the number of cores present.Fenugreek
@VonC, your last comment is not entirely correct. AMD Opteron CPUs with larger number of cores are actually 2-way NUMA systems on their own with two HT-interconnected dies with own memory controllers in a single physical package. Also, Intel Haswell-EP CPUs with 10 or more cores come with two cache-coherent ring networks and two memory controllers and can be operated in a cluster-on-die mode, which presents itself as a two-way NUMA system. It is wiser to say that a NUMA node is some cores that can reach some memory directly without going through a HT, QPI, NUMAlink or some other interconnect.Amon
@HristoIliev Thank you for this comment (I have included it in the answer for more visibility, with some links to define HT and QPI). I agree my initial comment was based on an over-simplification of the CPU architecture.Fenugreek
