In MPI, how do I make the following program wait until all calculations are completed?

I am new to MPI, and this program has been written in C. In the following program, I am asking the other processes to print messages, but I want to print the "END" message at process/rank 0 only after all of the other processes have completed. To run this program I am using 4 processes and the following commands: mpicc file.c -o objfile and mpirun -np 4 objfile
Please show me an example if possible.

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char** argv) 
{
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int i;
    double centroid[3]; /* ignore this array */

    if (world_rank == 0) 
    {
        int destination;
        for (i=0; i<3; i++)
        {
            /* Ignore the centroid buffer being sent for now */

            destination = i + 1; /* destination rank or process */
            MPI_Send(&centroid, 3, MPI_DOUBLE, destination, 0, MPI_COMM_WORLD);
        }

        printf("\nEND: This need to print after all MPI_Send/MPI_Recv has been completed\n\n");
    } 
    else
    {   
        MPI_Recv(&centroid, 3, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);        
        sleep(1); /* This represents the many calculations that will happen here later, instead of sleep */
        printf("Printing at Rank/Process number: %d\n", world_rank);
    }


    MPI_Finalize();
    return 0;
}

Result:

END: This needs to print after all MPI_Send/MPI_Recv have been completed
Printing at Rank/Process number: 2
Printing at Rank/Process number: 3
Printing at Rank/Process number: 1

Please modify this code or show me an example of how to wait until all of these other processes are done.

Winton answered 20/9, 2016 at 23:40 Comment(0)

Well, you were nearly there. All that is missing is a call to MPI_Barrier() on all processes. This can be done this way:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char** argv) 
{
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int i;
    double centroid[3]; /* ignore this array */

    if (world_rank == 0) 
    {
        int destination;
        for (i=0; i<3; i++)
        {
            /* Ignore the centroid buffer being sent for now */

            destination = i + 1; /* destination rank or process */
            MPI_Send(&centroid, 3, MPI_DOUBLE, destination, 0, MPI_COMM_WORLD);
        }
        MPI_Barrier(MPI_COMM_WORLD); /* block here until every rank has reached the barrier */
        printf("\nEND: This need to print after all MPI_Send/MPI_Recv has been completed\n\n");
    } 
    else
    {   
        MPI_Recv(&centroid, 3, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);        
        sleep(1); /* This represents the many calculations that will happen here later, instead of sleep */
        printf("Printing at Rank/Process number: %d\n", world_rank);
        MPI_Barrier(MPI_COMM_WORLD); /* signal that this rank's work and printing are done */
    }


    MPI_Finalize();
    return 0;
}

With this barrier added, the code gives the following on my laptop:

~/tmp$ mpirun -n 4 ./a.out 
Printing at Rank/Process number: 1
Printing at Rank/Process number: 2
Printing at Rank/Process number: 3

END: This needs to print after all MPI_Send/MPI_Recv have been completed

NB: in this case the printing from ranks 1 to 3 happened to come out in order, but that is just by chance; it can happen in any order. Actually, the above code guarantees the ordering of the calls to printf() between process #0 and the other processes, but it does not guarantee the order in which the output appears on the screen, since buffering and the like can still reorder it.
In fairness, it should work in most environments most of the time, but strictly speaking this isn't guaranteed.
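
If you ever need the lines themselves to appear on screen in a deterministic order, a common workaround is to funnel all output through rank 0: each worker formats its message into a buffer and sends it to rank 0, which receives the messages in rank order, prints them, and only then prints END. The following is only a minimal sketch of that idea, not part of the code above, and the buffer length MSG_LEN is an arbitrary choice for illustration:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

#define MSG_LEN 128 /* arbitrary message buffer size for this sketch */

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    char msg[MSG_LEN];

    if (world_rank == 0)
    {
        int source;
        /* Receive one message per worker, in rank order, and print it. */
        for (source = 1; source < world_size; source++)
        {
            MPI_Recv(msg, MSG_LEN, MPI_CHAR, source, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("%s", msg);
        }
        /* Every worker has reported back, so END really is printed last. */
        printf("\nEND: printed after every worker's message\n\n");
    }
    else
    {
        sleep(1); /* placeholder for the real calculations */
        snprintf(msg, MSG_LEN, "Printing at Rank/Process number: %d\n", world_rank);
        MPI_Send(msg, MSG_LEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Since rank 0 is the only process that calls printf(), the order of the output no longer depends on how the MPI runtime forwards and buffers stdout from the other ranks.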

Ancier answered 21/9, 2016 at 6:32 Comment(1)
Thank you for the clear message and for showing an example. I have a similar question at the following link if you can answer it: #39968386 – Winton
