How to determine MPI rank/process number local to a socket/node

Say I run a parallel program using MPI. The execution command

mpirun -n 8 -npernode 2 <prg>

launches 8 processes in total, that is, 2 processes on each of 4 nodes (Open MPI 1.5). Each node has one dual-core CPU, and the nodes are connected via InfiniBand.

Now, the rank number (or process number) can be determined with

int myrank;
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

This returns a number between 0 and 7.

But how can I determine the node number (in this case, a number between 0 and 3) and the process number within a node (a number between 0 and 1)?

Lied answered 26/1, 2012 at 17:30 Comment(0)

It depends on the MPI implementation; there is no standard way to obtain this information.

Open MPI has some environment variables that can help. OMPI_COMM_WORLD_LOCAL_RANK gives you the local rank within a node, i.e. the process number you are looking for. A call to getenv will therefore answer your question, but this is not portable to other MPI implementations.
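
A minimal sketch of reading it, assuming the job was started with Open MPI's mpirun (under other MPI implementations the variable is simply absent and the fallback value is returned):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Open MPI exports the node-local rank into each process's environment. */
    const char *env = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
    int local_rank = env ? atoi(env) : -1;   /* -1: not running under Open MPI */

    printf("global rank %d has local rank %d\n", myrank, local_rank);

    MPI_Finalize();
    return 0;
}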

See this for the (short) list of variables in OpenMPI.

I don't know of a corresponding "node number".

Banger answered 29/1, 2012 at 10:24 Comment(2)
Thanks for the answer. The pasted link is broken. I found this link that works: open-mpi.org/faq/?category=running#mpi-environmental-variables – Lineberry
This should not be the accepted answer. Using MPI_Comm_split_type you can analyze the distribution over nodes without a problem. Maybe that didn't exist when this answer was written, but right now this answer is insufficient. – Lionellionello

I believe you can achieve that with MPI-3 in this manner:

MPI_Comm shmcomm;
/* Split MPI_COMM_WORLD into sub-communicators whose processes can share
   memory, i.e. one communicator per node. */
MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                    MPI_INFO_NULL, &shmcomm);
int shmrank;
MPI_Comm_rank(shmcomm, &shmrank);   /* rank local to this node */
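
This yields the node-local rank (shmrank). To also get a node number (0 to 3 in the question's setup), a minimal sketch, assuming the common trick of letting each node's local rank 0 enumerate the nodes through a second split and then broadcasting the result node-wide:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Sub-communicator of all ranks that can share memory, typically one per node. */
    MPI_Comm shmcomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shmcomm);
    int shmrank;
    MPI_Comm_rank(shmcomm, &shmrank);            /* process number within the node */

    /* Put each node's local rank 0 into a "leaders" communicator; its rank there
       enumerates the nodes. Everyone else passes MPI_UNDEFINED and gets MPI_COMM_NULL. */
    MPI_Comm leaders;
    MPI_Comm_split(MPI_COMM_WORLD, shmrank == 0 ? 0 : MPI_UNDEFINED,
                   myrank, &leaders);

    int nodenum = 0;
    if (leaders != MPI_COMM_NULL) {
        MPI_Comm_rank(leaders, &nodenum);
        MPI_Comm_free(&leaders);
    }
    /* The node's local rank 0 tells the other ranks on the node their node number. */
    MPI_Bcast(&nodenum, 1, MPI_INT, 0, shmcomm);

    printf("global rank %d: node %d, local rank %d\n", myrank, nodenum, shmrank);

    MPI_Comm_free(&shmcomm);
    MPI_Finalize();
    return 0;
}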
Ifill answered 19/10, 2016 at 5:15 Comment(1)
@aleixrocks That link describing MPI_Comm_split_type is failing. Here's an updated one, which just strips off ?Aspx... – Maccabean

This exact problem is discussed on Markus Wittmann's blog in the post MPI Node-Local Rank determination.

There, three strategies are suggested:

  1. A naive, portable solution employs MPI_Get_processor_name or gethostname to create a unique identifier for the node and performs an MPI_Alltoall on it. [...] (see the sketch below)
  2. [Method 2] relies on MPI_Comm_split, which provides an easy way to split a communicator into subgroups (sub-communicators). [...]
  3. Shared memory can be utilized, if available. [...]

For some working code (presumably LGPL licensed?), Wittmann links to MpiNodeRank.cpp from the APSM library.
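
As a rough illustration of strategy 1 (sketched here with MPI_Allgather of the processor names, not the blog's exact code), every rank can collect all names and derive both numbers from them; this is portable to any MPI implementation, at the cost of O(P) memory per rank:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char name[MPI_MAX_PROCESSOR_NAME] = {0};
    int len;
    MPI_Get_processor_name(name, &len);

    /* Every rank collects every rank's processor name. */
    char *all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
    MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, MPI_COMM_WORLD);

    /* Node number: count the distinct names that appear before the first rank
       running on this node. */
    int node_num = 0;
    for (int i = 0; i < size; ++i) {
        const char *other = all + (size_t)i * MPI_MAX_PROCESSOR_NAME;
        if (strcmp(other, name) == 0)
            break;                                /* first rank of my node found */
        int seen = 0;
        for (int j = 0; j < i; ++j)
            if (strcmp(all + (size_t)j * MPI_MAX_PROCESSOR_NAME, other) == 0) {
                seen = 1;
                break;
            }
        if (!seen)
            ++node_num;
    }

    /* Local rank: how many lower global ranks share this node's name. */
    int local_rank = 0;
    for (int i = 0; i < rank; ++i)
        if (strcmp(all + (size_t)i * MPI_MAX_PROCESSOR_NAME, name) == 0)
            ++local_rank;

    printf("rank %d: node %d, local rank %d\n", rank, node_num, local_rank);

    free(all);
    MPI_Finalize();
    return 0;
}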

Azurite answered 3/8, 2015 at 16:40 Comment(1)
Strategy 2 is elegant and portable. – Lionellionello

Alternatively, you can use

int MPI_Get_processor_name( char *name, int *resultlen )

to retrieve the node name and then use it as the color in

int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)

This is not as simple as MPI_Comm_split_type; however, it offers a bit more freedom to split your communicator the way you want. Note that the color must be a non-negative integer, so the node name has to be mapped to one first (for example by hashing it).
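
A minimal sketch of this approach, assuming a simple string hash of the processor name as the color (caveat: two different node names could in principle hash to the same color, so real code should detect or resolve collisions):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);

    /* djb2-style string hash, reduced to the non-negative int required for color. */
    unsigned long h = 5381;
    for (int i = 0; i < len; ++i)
        h = h * 33 + (unsigned char)name[i];
    int color = (int)(h % 0x7FFFFFFF);

    /* All ranks with the same processor name (same color) end up in one communicator. */
    MPI_Comm nodecomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, myrank, &nodecomm);

    int local_rank;
    MPI_Comm_rank(nodecomm, &local_rank);        /* process number within the node */

    printf("global rank %d on %s has local rank %d\n", myrank, name, local_rank);

    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}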

Lilialiliaceous answered 25/5, 2018 at 9:12 Comment(0)
