shared memory, MPI and queuing systems

My unix/windows C++ app is already parallelized using MPI: the job is split across N CPUs, each chunk is executed in parallel, it is quite efficient, the speed scaling is very good, and the job is done right.

But some of the data is repeated in each process, and for technical reasons this data cannot easily be split over MPI (...). For example:

  • 5 Gb of static data: the exact same thing is loaded by each process
  • 4 Gb of data that can be distributed over MPI: the more CPUs are used, the smaller this per-CPU share is.

On a 4-CPU job, this would mean at least a 20 Gb RAM load, with most of the memory 'wasted' on duplicated data; this is awful.

I'm thinking of using shared memory to reduce the overall load: the "static" chunk would be loaded only once per computer.

So, the main questions are:

  • Is there any standard MPI way to share memory on a node? Some kind of readily available, free library?

    • If not, I would use Boost.Interprocess and use MPI calls to distribute the local shared-memory identifiers (see the sketch after this list).
    • The shared memory would be prepared by a "local master" on each node and shared read-only. There is no need for any kind of semaphore/synchronization, because it won't change.
  • Any performance hit or particular issues to be wary of?

    • (There won't be any "strings" or overly weird data structures; everything can be brought down to arrays and structure pointers.)
  • The job will be executed under a PBS (or SGE) queuing system; in the case of an unclean process exit, I wonder whether those will clean up the node-specific shared memory.
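Roughly what I have in mind for the fallback plan (a minimal sketch only, assuming Boost.Interprocess; the segment name "static_block" is a placeholder, and electing the per-node "local master" plus broadcasting the identifier over MPI is elided):

    #include <boost/interprocess/shared_memory_object.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <cstring>

    namespace bip = boost::interprocess;

    // Local master on each node: create the segment once and fill it with the static data.
    const void* publish_static_data(const char* src, std::size_t bytes) {
        bip::shared_memory_object::remove("static_block");           // clear any stale segment
        bip::shared_memory_object shm(bip::create_only, "static_block", bip::read_write);
        shm.truncate(bytes);
        static bip::mapped_region region(shm, bip::read_write);      // static: keep the mapping alive
        std::memcpy(region.get_address(), src, bytes);
        return region.get_address();
    }

    // Every other process on the node: attach read-only; no synchronization needed since it never changes.
    const void* attach_static_data() {
        bip::shared_memory_object shm(bip::open_only, "static_block", bip::read_only);
        static bip::mapped_region region(shm, bip::read_only);
        return region.get_address();
    }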

Hearing answered 26/12, 2009 at 18:28 Comment(3)
After the answers so far, tests and further reading, memory-mapped files might be the easiest option: only the master MPI process would need to "prepare" the memory file, which would then be mapped by all processes. Since the file will be read-only, there is no need to worry about content consistency. No idea about performance though... maybe only experiment will tell.Hearing
Performance is completely dependent on your platform. Your details are sparse, but given your available CPUs and RAM, you shouldn't have a huge problem. The only place where mmapped files fail you is if you need to change the shared memory (your distributed data), do not need the shared memory's contents to be persistent, and just need shared RAM. In that case, your system will waste a lot of time writing all your memory changes to disk.Silicon
I was away and could not choose the final answer, so the one with the most votes got it :) Anyway, lots of good answers all around, but nothing exactly answering what I was looking for, so I guess there is no widely standard way to do this!Hearing

One increasingly common approach in High Performance Computing (HPC) is hybrid MPI/OpenMP programs. I.e. you have N MPI processes, and each MPI process has M threads. This approach maps well to clusters consisting of shared memory multiprocessor nodes.

Changing to such a hierarchical parallelization scheme obviously requires more or less invasive changes; on the other hand, if done properly it can increase the performance and scalability of the code in addition to reducing memory consumption for replicated data.

Depending on the MPI implementation, you may or may not be able to make MPI calls from all threads. This is specified by the required and provided arguments to the MPI_Init_thread() function, which you must call instead of MPI_Init(). Possible values are:

MPI_THREAD_SINGLE
    Only one thread will execute.
MPI_THREAD_FUNNELED
    The process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread).
MPI_THREAD_SERIALIZED
    The process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time: MPI calls are not made concurrently from two distinct threads (all MPI calls are "serialized").
MPI_THREAD_MULTIPLE
    Multiple threads may call MPI, with no restrictions.
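A minimal hybrid skeleton, assuming MPI_THREAD_FUNNELED is enough (only the main thread talks to MPI, the OpenMP worker threads just compute; the loop body is a placeholder):

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            std::fprintf(stderr, "MPI library does not provide MPI_THREAD_FUNNELED\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local_sum = 0.0;
        // The threads of one MPI process share that process's single copy of the replicated data.
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < 1000000; ++i)
            local_sum += i * 1e-6;                        // placeholder work

        // Only the main thread makes MPI calls, as FUNNELED requires.
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("sum = %f\n", global_sum);

        MPI_Finalize();
        return 0;
    }

Compile with something like mpicxx -fopenmp; without OpenMP enabled the pragma is simply ignored and each process runs the loop serially.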

In my experience, modern MPI implementations like Open MPI support the most flexible MPI_THREAD_MULTIPLE. If you use older MPI libraries, or some specialized architecture, you might be worse off.

Of course, you don't need to do your threading with OpenMP, that's just the most popular option in HPC. You could use e.g. the Boost threads library, the Intel TBB library, or straight pthreads or windows threads for that matter.

Poacher answered 6/1, 2010 at 0:52 Comment(3)
If you change your code to be multi-threaded on each shared-memory multiple-processor node, make sure to write your thread scheduling carefully to take cache locality and other memory architecture into account.Zennas
I'm not sure that the hybrid approach is increasingly common. Here's one example of the evidence that it may not be an approach worth taking -- pdc.kth.se/education/historical/2008/PRACE-P2S2/coursework/… Yes, it's a nice concept, but in practice of dubious value compared to the effort required to modify your application.Abigail
this answer does not address any of the issues in the questionCarpathoukraine

I haven't worked with MPI, but if it's like other IPC libraries I've seen that hide whether other threads/processes/whatever are on the same or different machines, then it won't be able to guarantee shared memory. Yes, it could handle shared memory between two nodes on the same machine, if that machine provided shared memory itself. But trying to share memory between nodes on different machines would be very difficult at best, due to the complex coherency issues raised. I'd expect it to simply be unimplemented.

In all practicality, if you need to share memory between nodes, your best bet is to do that outside MPI. I don't think you need to use boost.interprocess-style shared memory, since you aren't describing a situation where the different nodes are making fine-grained changes to the shared memory; it's either read-only or partitioned.

John's and deus's answers cover how to map in a file, which is definitely what you want to do for the 5 Gb (gigabit?) static data. The per-CPU data sounds like the same thing; you just need to send a message to each node telling it what part of the file it should grab. The OS should take care of mapping virtual memory to physical memory and to the file.
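Something along these lines, I'd imagine, for the partitioned data on unix: each rank derives its own byte range from its rank (so the "message" is implicit) and maps only that slice. The file name and the equal-sized chunks are assumptions, and error handling is minimal:

    #include <mpi.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int fd = open("distributed.dat", O_RDONLY);      // placeholder file name
        if (fd < 0) { std::perror("open"); MPI_Abort(MPI_COMM_WORLD, 1); }
        struct stat st;
        fstat(fd, &st);

        // Split the file into nprocs chunks; mmap offsets must be page-aligned.
        off_t chunk   = st.st_size / nprocs;
        off_t begin   = rank * chunk;
        off_t page    = sysconf(_SC_PAGESIZE);
        off_t aligned = (begin / page) * page;
        size_t length = (rank == nprocs - 1) ? (st.st_size - aligned)
                                             : (begin + chunk - aligned);

        void* slice = mmap(nullptr, length, PROT_READ, MAP_SHARED, fd, aligned);
        if (slice == MAP_FAILED) { std::perror("mmap"); MPI_Abort(MPI_COMM_WORLD, 1); }

        // ... work on the bytes from (char*)slice + (begin - aligned) up to (char*)slice + length ...

        munmap(slice, length);
        close(fd);
        MPI_Finalize();
        return 0;
    }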

As for cleanup... I would assume it doesn't do any cleanup of shared memory, but mmapped files should be cleaned up, since files are closed (which should release their memory mappings) when a process is torn down. I have no idea what caveats CreateFileMapping etc. have.

Actual "shared memory" (i.e. boost.interprocess) is not cleaned up when a process dies. If possible, I'd recommend trying killing a process and seeing what is left behind.

Rein answered 27/12, 2009 at 7:21 Comment(0)

With MPI-2 you have RMA (remote memory access) via functions such as MPI_Put and MPI_Get. Using these features, if your MPI installation supports them, would certainly help you reduce the total memory consumption of your program. The cost is added complexity in coding but that's part of the fun of parallel programming. Then again, it does keep you in the domain of MPI.
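For example (a sketch only, with placeholder sizes and a toy access pattern): rank 0 keeps the single full copy of the static array and exposes it in a window, and the other ranks MPI_Get just the pieces they need instead of holding the whole thing themselves.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const MPI_Aint N = 1 << 20;                  // placeholder element count
        std::vector<double> data;
        if (rank == 0) data.assign(N, 1.0);          // only the owner holds the full array

        MPI_Win win;
        MPI_Win_create(rank == 0 ? data.data() : nullptr,
                       rank == 0 ? N * (MPI_Aint)sizeof(double) : 0,
                       sizeof(double), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        double chunk[1024];                          // each rank fetches a small slice
        if (rank != 0)                               // toy displacement; assumes well under 1024 ranks
            MPI_Get(chunk, 1024, MPI_DOUBLE, /*target rank*/ 0,
                    /*displacement*/ (MPI_Aint)rank * 1024, 1024, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }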

Abigail answered 29/12, 2009 at 10:5 Comment(3)
Wouldn't that enormously increase the latency of accesses to shared memory? Or is MPI_Get just an alias for a direct fetch across the memory bus?Edelstein
@Edelstein Yes, MPI-2 RMA is not really any faster than the traditional Send/Recv. In many cases slower, due to the need to register memory windows. In principle, in the future with special network hardware support it might get faster, but today there is little reason to use it.Poacher
Yes indeed. But perhaps a reason to use MPI-2 RMA is to do shared-memory programming within the MPI paradigm, without having recourse to lower-level features such as memory-mapped files or IPC libraries. The price of marginally better execution performance may well be much lower development productivity. I wonder what the OP is making of all this.Abigail

MPI-3 offers shared memory windows (see e.g. MPI_Win_allocate_shared()), which allows usage of on-node shared memory without any additional dependencies.
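A sketch of what that looks like (sizes are placeholders): rank 0 of each node allocates the block, and everyone else on the node just queries a pointer into the same physical memory.

    #include <mpi.h>
    #include <cstring>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        // Communicator containing only the ranks that share this node's memory.
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        int node_rank;
        MPI_Comm_rank(node_comm, &node_rank);

        const MPI_Aint bytes = 1024 * 1024;          // placeholder for the static block
        MPI_Win win;
        char* base = nullptr;
        // Only the node's rank 0 contributes memory; the others allocate 0 bytes.
        MPI_Win_allocate_shared(node_rank == 0 ? bytes : 0, 1, MPI_INFO_NULL,
                                node_comm, &base, &win);

        // Every rank asks for the address of rank 0's segment in its own address space.
        MPI_Aint size;
        int disp_unit;
        char* shared = nullptr;
        MPI_Win_shared_query(win, 0, &size, &disp_unit, &shared);

        if (node_rank == 0)
            std::memset(shared, 0, bytes);           // load/initialize the static data once
        MPI_Barrier(node_comm);                      // readers wait until it is filled

        // ... all ranks on the node now read 'shared' ...

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }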

Bergstein answered 26/3, 2018 at 6:50 Comment(1)
It's interesting to read the other answer, all dating to 2009, and see what hoops people had to jump through prior to MPI 3 in 2012.Publication

I don't know much about unix, and I don't know what MPI is. But in Windows, what you are describing is an exact match for a file mapping object.

If this data is embedded in your .EXE or a .DLL that it loads, then it will automatically be shared between all processes. Teardown of your process, even as a result of a crash, will not cause any leaks or unreleased locks on your data. However, a 9 Gb .DLL sounds a bit iffy, so this probably doesn't work for you.

However, you could put your data into a file, then use CreateFileMapping and MapViewOfFile on it. The mapping can be read-only, and you can map all or part of the file into memory. All processes will share the pages that are mapped from the same underlying file-mapping object. It's good practice to unmap views and close handles, but if you don't, the OS will do it for you on teardown.
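Something like this (a sketch only; "static.dat" is a placeholder name and error handling is minimal):

    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE file = CreateFileA("static.dat", GENERIC_READ, FILE_SHARE_READ,
                                  nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE) { std::fprintf(stderr, "CreateFile failed\n"); return 1; }

        // PAGE_READONLY mapping backed by the file; all processes mapping the same
        // file share the same physical pages.
        HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
        const void* view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);   // map the whole file

        // ... overlay your arrays/structures on 'view' and compute ...

        UnmapViewOfFile(view);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }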

Note that unless you are running x64, you won't be able to map a 5Gb file into a single view (or even a 2Gb file, 1Gb might work). But given that you are talking about having this already working, I'm guessing that you are already x64 only.

Carmencita answered 26/12, 2009 at 21:27 Comment(2)
From the documentation, I infer that boost.interprocess permits doing this, in a cross-platform way (no need to #ifdef) and with "clean" code. And there is a Windows-specific option permitting exactly what you describe. But the meat of the problem here is not the technical implementation of the shared-memory system, but how to do this cleanly when you have 128 instances of your application distributed over 8-core machines :-)Hearing
I'm not sure why that would be a problem. Are you saying that you want to share across multiple machines? I'm pretty sure each machine is going to see only its own RAM, and that all cores on a machine share a view of that machine's RAM.Carmencita

If you store your static data in a file, you can use mmap on unix to get random access to the data. Data will be paged in as and when you need access to a particular bit of the data. All that you will need to do is overlay any binary structures over the file data. This is the unix equivalent of CreateFileMapping and MapViewOfFile mentioned above.
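For example (a sketch, with a made-up file name and record layout):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    struct Record { double x, y, z; };   // hypothetical fixed-layout record

    int main() {
        int fd = open("static.dat", O_RDONLY);
        if (fd < 0) { std::perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        // Read-only, shared mapping: pages are faulted in lazily, and the page cache is
        // shared by every process mapping the same file on the node.
        void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { std::perror("mmap"); return 1; }

        const Record* records = static_cast<const Record*>(base);
        std::printf("first record: %f %f %f\n", records[0].x, records[0].y, records[0].z);

        munmap(base, st.st_size);
        close(fd);
        return 0;
    }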

Incidentally glibc uses mmap when one calls malloc to request more than a page of data.

Heres answered 26/12, 2009 at 22:50 Comment(1)
The glibc malloc mmap threshold is by default 128 kB, which is not the same size as a page.Poacher

I have done some projects with MPI at SHUT.

As far as I know, there are many ways to distribute a problem using MPI; maybe you can find another solution that does not require shared memory. My project was solving a system of 7,000,000 equations in 7,000,000 variables.

If you can explain your problem, I will try to help you.

Cirrus answered 28/12, 2009 at 19:28 Comment(2)
For sure, the "static" part of the problem could be parallelized better, but the development time would be huge. Most of the memory of the "full" problem is possible to load once on each compute node. So, I'm aiming for shared memory, and aiming the best technique to do so!Hearing
What I would like to know is what kind of problem you were solving that had 7*10^6 variables.Tidwell

I ran into this problem in the small when I used MPI a few years ago.

I am not certain that SGE understands memory-mapped files. If you are distributing across a Beowulf cluster, I suspect you're going to have coherency issues. Could you say a little more about your multiprocessor architecture?

My draft approach would be to set up an architecture where each part of the data is owned by a defined CPU. There would be two threads: one thread being an MPI two-way talker and one thread for computing the result. Note that MPI and threads don't always play well together.

Puffball answered 6/1, 2010 at 1:15 Comment(2)
Yes, the data is owned by only one CPU, and it is read-only, so there is no coherency problem here. So, a memory-mapped file might be an easy option.Hearing
Agreed. But that's going to depend on your architecture. memmapped files are best in a shared-memory architecture. I'm not sure how you'd do it with a beowulf cluster.Puffball
