I would like to ask a question about rand() in the context of (Open)MPI. We were given an implementation task in our parallel programming course: create an MPI application in which all participating processes choose one leader (randomly, i.e. they have to "vote"). My program looks like this:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <limits.h>
#include <mpi.h>

int main (int argc, char *argv[]) {
    int rank, size, vote, result;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    vote = rand(); // Each process' vote.
    printf("%d: %d\n", rank+1, vote); // Only for debugging purposes here.

    MPI_Allreduce(&vote, &result, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    result = (result & INT_MAX) % size + 1; // Select the leader.

    printf("Process %*d/%d: %d is the leader.\n", (int)(ceil(log10(size+1))), rank+1, size, result);

    MPI_Finalize();
    return 0;
}
The problem is that when I compile and run it with Open MPI 1.6, every process's vote is 1804289383, no matter how many processes the program is started with. The number is also the same on every new run of the program. Thus, if I run mpirun -np 7 ./a.out, the leader is always process 5; if I run it with -np 8, the first process is always the leader; and so on...
Could anyone please explain what I am doing wrong and how to fix this behaviour?
Thank you very much.
rand is hopelessly bad, even when seeded. – Natividadnativism
rand is bad, but not "generates the same sequence on distinct processes in subsequent runs" bad. It's more than suitable for the task at hand, and the single line of code that solves the OP's problem is a lot easier than importing any third-party RNG could possibly be. – Chamness
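For what it's worth, the "single line" alluded to above is presumably a rank-dependent srand() call before the vote is drawn: without any seeding, every process starts rand() from the same default seed of 1 and therefore produces the identical sequence (1804289383 is exactly glibc's first value for that seed). A minimal standalone sketch of that idea, assuming that seeding with time(NULL) plus the rank is good enough for this exercise:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, vote;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Seed each process differently; with no seed at all, every rank
       draws the same sequence and all votes come out identical. */
    srand((unsigned)time(NULL) + rank);

    vote = rand();
    printf("%d: %d\n", rank + 1, vote);

    MPI_Finalize();
    return 0;
}

Since all ranks are typically launched within the same second, time(NULL) alone would not differentiate them; adding the rank does, and a stronger mix (e.g. multiplying the rank by a large odd constant before adding it) avoids nearly identical seeds on neighbouring ranks.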