Use of MPI_COMM_SELF

I've discovered an MPI communicator called MPI_COMM_SELF. The problem is that I don't know when it is useful. It appears to me that every process just "thinks" of itself as root.

Could you explain how exactly MPI_COMM_SELF works and in which situations it is useful?

I've found this slide show, but the communicator is only briefly mentioned there.


I've tried this "Hello, world" example and all processes returned 0 as their PID.

#include <mpi.h>
#include <stdio.h>

int main() {
    MPI_Init(NULL, NULL);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_SELF, &world_rank);

    printf("Hello, my PID is %d!\n",
            world_rank);

    MPI_Finalize();
    return 0;
}
Unrepair answered 28/5, 2015 at 15:43 Comment(2)
I'm puzzled why you haven't just googled this. This page has a case where MPI_COMM_SELF is useful.Numskull
@aakashjain I've somehow missed it. But now I'm puzzled :-) Couldn't it just be replaced by an if/else-if/else statement in this case? (reading/writing to multiple files at once)Unrepair

An MPI communicator consists of two pieces of information: a collection of processes, and a context for that collection. The default communicators are MPI_COMM_WORLD -- every process -- and MPI_COMM_SELF -- just the calling process.

You can make more communicators with all, one, or some of the processes.

Why is context important? Think libraries. A library using MPI could have its messages conflict with those of the client of that library, so the library duplicates the communicator, thereby creating a context in which it can communicate without ever having to worry about what the client is doing.
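For instance, a minimal sketch of what such a library might do (the lib_init/lib_finalize names are made up for illustration):

#include <mpi.h>

/* Hypothetical library: duplicate the client's communicator so the
   library's messages can never be matched by the client's. */
static MPI_Comm lib_comm = MPI_COMM_NULL;

void lib_init(MPI_Comm client_comm) {
    /* Same group of processes, but a brand-new context. */
    MPI_Comm_dup(client_comm, &lib_comm);
}

void lib_finalize(void) {
    MPI_Comm_free(&lib_comm);
}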

MPI_COMM_SELF contains a single process: the caller. If you call a collective routine on a communicator, all processes in that communicator must participate.
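So, as a trivial sketch, a collective over MPI_COMM_SELF completes with the calling process alone:

#include <mpi.h>
#include <stdio.h>

int main(void) {
    MPI_Init(NULL, NULL);

    int local = 42, result = 0;
    /* Only the caller is in MPI_COMM_SELF, so this reduction involves no
       other ranks and result simply becomes local. */
    MPI_Allreduce(&local, &result, 1, MPI_INT, MPI_SUM, MPI_COMM_SELF);
    printf("result = %d\n", result);

    MPI_Finalize();
    return 0;
}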

MPI_COMM_SELF is particularly useful for the MPI-IO routines, but only if you want "file per process". If you are sharing a file among multiple MPI processes (and you probably should be), use a communicator encompassing those MPI processes.
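A rough sketch of the file-per-process pattern (the file name scheme is just for illustration); a shared file would pass MPI_COMM_WORLD or a sub-communicator to MPI_File_open() instead:

#include <mpi.h>
#include <stdio.h>

int main(void) {
    MPI_Init(NULL, NULL);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One file per process: the open is collective, but over MPI_COMM_SELF
       it involves nobody but the calling process. */
    char filename[64];
    snprintf(filename, sizeof filename, "out.%d.dat", rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_SELF, filename,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write(fh, &rank, 1, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}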

Seabolt answered 28/5, 2015 at 20:14 Comment(0)

Besides the IO-related use of MPI_COMM_SELF, there are two more uses described in the MPI standard.

One particular use of MPI_COMM_SELF is to have user functions called during the finalisation of the MPI library, very similar to the atexit() mechanism in C. In fact, one cannot reliably use the atexit() mechanism in MPI programs, since MPI implementations are only required to return from MPI_Finalize() on rank 0, and therefore another mechanism is needed (also, Fortran has no equivalent of atexit() at all).

Fortunately, MPI provides a caching mechanism that allows arbitrary attributes to be portably attached to certain MPI objects, namely communicators, windows and datatypes, which is mainly useful when writing portable libraries. Each attribute has a pair of copy and delete callbacks that get called whenever a certain event happens, for example when an attribute gets copied as a result of the duplication of a communicator.

The standard gives no guarantee about the order in which MPI objects get destroyed during MPI_Finalize(), but it does guarantee that MPI_COMM_SELF is the first one to be destroyed. Therefore, attaching an attribute with a delete callback to MPI_COMM_SELF will trigger that callback right at the beginning of MPI_Finalize().
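A minimal sketch of that trick (the callback name is arbitrary):

#include <mpi.h>
#include <stdio.h>

/* Delete callback: invoked when the communicator it is attached to gets
   destroyed.  Attached to MPI_COMM_SELF, it runs at the very start of
   MPI_Finalize(). */
static int finalize_hook(MPI_Comm comm, int keyval, void *attr, void *extra)
{
    printf("MPI is being finalised\n");
    return MPI_SUCCESS;
}

int main(void)
{
    MPI_Init(NULL, NULL);

    int keyval;
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, finalize_hook,
                           &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);

    /* ... the actual work of the program ... */

    MPI_Finalize();   /* finalize_hook() fires during this call */
    return 0;
}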

Another use of MPI_COMM_SELF is with the client/server mechanism of MPI. If you have an MPI job and want just one of its ranks to accept a client connection from a separate MPI job, you must use MPI_COMM_SELF, since MPI_Comm_accept() is collective over its communicator and only that single rank should take part in accepting the connection.
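A rough sketch of the server side, assuming the client learns the port name out of band (e.g. via MPI_Publish_name() or a file) and calls MPI_Comm_connect():

#include <mpi.h>
#include <stdio.h>

int main(void)
{
    MPI_Init(NULL, NULL);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Only rank 0 accepts the connection, so MPI_Comm_accept() -- a
           collective call -- is made over MPI_COMM_SELF. */
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm client;

        MPI_Open_port(MPI_INFO_NULL, port);
        printf("Server listening on port: %s\n", port);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

        /* ... exchange messages with the client over 'client' ... */

        MPI_Comm_disconnect(&client);
        MPI_Close_port(port);
    }

    MPI_Finalize();
    return 0;
}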

Subterfuge answered 29/5, 2015 at 11:36 Comment(0)
