MPI_Send() may or may not block. It will block until the sender can safely reuse the send buffer. Some implementations will return to the caller once the buffer has been handed off to a lower communication layer; others will return only when a matching MPI_Recv() has been posted at the other end. So whether this program deadlocks or not is up to your MPI implementation.
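For reference, the kind of program being discussed looks roughly like this, with both ranks sending before receiving (a hypothetical reconstruction, not the original code):

if (my_rank == 0) {
    /* Hypothetical pattern: both ranks send first, so completion
       depends entirely on the implementation's internal buffering. */
    MPI_Send (sendbuf, count, MPI_INT, 1, tag, comm);
    MPI_Recv (recvbuf, count, MPI_INT, 1, tag, comm, &status);
} else if (my_rank == 1) {
    MPI_Send (sendbuf, count, MPI_INT, 0, tag, comm);
    MPI_Recv (recvbuf, count, MPI_INT, 0, tag, comm, &status);
}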
Because this program behaves differently across MPI implementations, you may want to rewrite it so that there is no possibility of deadlock:
MPI_Comm_rank (comm, &my_rank);

if (my_rank == 0) {
    /* Rank 0 sends first, then receives. */
    MPI_Send (sendbuf, count, MPI_INT, 1, tag, comm);
    MPI_Recv (recvbuf, count, MPI_INT, 1, tag, comm, &status);
} else if (my_rank == 1) {
    /* Rank 1 receives first, matching rank 0's send, then replies. */
    MPI_Recv (recvbuf, count, MPI_INT, 0, tag, comm, &status);
    MPI_Send (sendbuf, count, MPI_INT, 0, tag, comm);
}
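For completeness, a self-contained version of the exchange above might look like this (the buffer contents, count and tag are arbitrary choices for illustration):

#include <mpi.h>

int main (int argc, char *argv[])
{
    int my_rank, count = 4, tag = 0;
    int sendbuf[4] = {1, 2, 3, 4}, recvbuf[4];
    MPI_Status status;
    MPI_Comm comm = MPI_COMM_WORLD;

    MPI_Init (&argc, &argv);
    MPI_Comm_rank (comm, &my_rank);

    if (my_rank == 0) {
        MPI_Send (sendbuf, count, MPI_INT, 1, tag, comm);
        MPI_Recv (recvbuf, count, MPI_INT, 1, tag, comm, &status);
    } else if (my_rank == 1) {
        MPI_Recv (recvbuf, count, MPI_INT, 0, tag, comm, &status);
        MPI_Send (sendbuf, count, MPI_INT, 0, tag, comm);
    }

    MPI_Finalize ();
    return 0;
}

Run it with two processes, e.g. mpirun -np 2 ./a.out.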
Always be aware that for every MPI_Send() there must be a matching MPI_Recv(), and both must be "parallel" in time. For example, the following may end in deadlock because the matching send/recv calls are not aligned in time; they cross each other:
RANK 0                              RANK 1
----------                          ----------
MPI_Send() ---            --- MPI_Send()    |
              ---      ---                  |
                 ------                     |
                   --                       | TIME
                 ------                     |
              ---      ---                  |
MPI_Recv() <--            --> MPI_Recv()    v
These processes, on the other hand, will not deadlock, provided of course that there are indeed two processes with ranks 0 and 1 in the same communicator:
RANK 0                              RANK 1
----------                          ----------
MPI_Send() ------------------> MPI_Recv()   |
                                            | TIME
                                            |
MPI_Recv() <------------------ MPI_Send()   v
The fixed program above may still fail if the size of the communicator comm does not allow for rank 1 (i.e. it contains only rank 0). In that case the if-else never takes the else route, so no process is listening for the MPI_Send(), and rank 0 deadlocks.
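A simple defensive option (a minimal sketch; it assumes <stdio.h> is included) is to check the communicator size before attempting the exchange:

int size;
MPI_Comm_size (comm, &size);
if (size < 2) {
    /* Rank 1 does not exist: abort instead of letting rank 0
       block forever on an unmatched MPI_Send(). */
    fprintf (stderr, "This program needs at least 2 processes.\n");
    MPI_Abort (comm, 1);
}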
If you need to keep your current communication layout, then you may prefer to use the nonblocking MPI_Isend() or MPI_Issend() instead, thus avoiding deadlock.
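As a sketch of that approach, each rank can post the nonblocking send, then the receive, and only wait for the send at the end (other_rank is an illustrative variable holding the peer's rank):

MPI_Request req;
MPI_Status status;

/* Post the send without blocking; sendbuf must not be reused
   until the request completes. */
MPI_Isend (sendbuf, count, MPI_INT, other_rank, tag, comm, &req);

/* The receive can now match the peer's send, so neither rank
   depends on the other's buffering. */
MPI_Recv (recvbuf, count, MPI_INT, other_rank, tag, comm, &status);

/* Complete the send before touching sendbuf again. */
MPI_Wait (&req, MPI_STATUS_IGNORE);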