I've got an MPI program consisting of one master process that hands off commands to a bunch of slave processes. Upon receiving a command, a slave just calls system() to execute it. While the slaves are waiting for a command, they consume 100% of their respective CPUs. My guess is that MPI::COMM_WORLD.Probe() is busy-waiting in a tight loop, but that's only a guess. What do you think might be causing this, and what could I do to fix it?
Here's the code in the slave process that waits for a command. Watching the log and the top command at the same time suggests that when the slaves are consuming their CPUs, they are inside this function.
MpiMessage
Mpi::BlockingRecv() {
  LOG(8, "BlockingRecv");

  MpiMessage result;
  MPI::Status status;

  // Block until a message from any source/tag is available, then inspect it.
  MPI::COMM_WORLD.Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, status);
  result.source = status.Get_source();
  result.tag = status.Get_tag();

  // Size the receive buffer from the probed message length.
  int num_elems = status.Get_count(MPI_CHAR);
  char buf[num_elems + 1];
  MPI::COMM_WORLD.Recv(
      buf, num_elems, MPI_CHAR, result.source, result.tag
  );
  buf[num_elems] = '\0';  // Recv doesn't null-terminate the payload.
  result.data = buf;

  LOG(7, "BlockingRecv about to return (%d, %d)", result.source, result.tag);
  return result;
}
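For what it's worth, the workaround I've been considering is to replace the blocking Probe with a non-blocking Iprobe poll that sleeps between checks. This is only a minimal sketch, assuming the same MPI C++ bindings as above and that a millisecond or so of added command latency is acceptable; the usleep() interval is an arbitrary choice on my part:

  #include <unistd.h>  // usleep

  // Poll with Iprobe instead of blocking in Probe, yielding the CPU
  // between checks. Trades a little latency for near-zero idle CPU use.
  MPI::Status status;
  while (!MPI::COMM_WORLD.Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, status)) {
    usleep(1000);  // sleep 1 ms between polls
  }
  // status now describes the pending message, as it would after Probe().

The rest of BlockingRecv() would stay the same, since status carries the source, tag, and count either way. I don't know whether this is the idiomatic solution, though, or whether there's a way to make Probe itself block without spinning.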
Side note: system() relies on fork(), and Open MPI warns against calling fork() on systems with OpenFabrics interconnect (InfiniBand or iWARP). – Hux