MPI-2.2 defines data length parameters as `int`. This could be, and usually is, a problem on most 64-bit Unix systems, since `int` there is still 32-bit. Such systems are referred to as LP64, which means that `long` and pointers are 64 bits long while `int` is 32 bits long; Linux on 64-bit x86 CPUs is an example of such an LP64 Unix-like system. In contrast, Windows x64 is an LLP64 system, which means that both `int` and `long` are 32 bits long while `long long` and pointers are 64 bits long.
Given all of the above, `MPI_Send` in MPI-2.2 implementations has a message size limit of 2^31-1 elements. One can overcome the limit by constructing a user-defined type (e.g. a contiguous type), which reduces the number of data elements. For example, if you register a contiguous type of 2^10 elements of some basic MPI type and then use `MPI_Send` to send 2^30 elements of this new type, the result is a message of 2^40 elements of the basic type. Some MPI implementations may still fail in such cases if they use `int` to handle the element count internally. It also breaks `MPI_Get_elements` and `MPI_Get_count`, as their output `count` argument is of type `int`.
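Here is a minimal sketch of that workaround, assuming the data is an array of `MPI_DOUBLE` and that the total count happens to be a multiple of the chunk size; the function name and chunk size are made up for illustration, and handling of the remainder is omitted.

```c
#include <mpi.h>

/* Send "count" doubles, where count may exceed 2^31-1.
 * The data is wrapped in a contiguous type so that the int count
 * actually passed to MPI_Send stays small. */
void send_big(const double *buf, long long count,
              int dest, int tag, MPI_Comm comm)
{
    const int chunk = 1 << 20;   /* 2^20 doubles per element of the new type */
    MPI_Datatype bigtype;

    MPI_Type_contiguous(chunk, MPI_DOUBLE, &bigtype);
    MPI_Type_commit(&bigtype);

    /* number of chunk-sized elements; any remainder would need a second send */
    int nchunks = (int)(count / chunk);
    MPI_Send(buf, nchunks, bigtype, dest, tag, comm);

    MPI_Type_free(&bigtype);
}
```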
MPI-3.0 addresses some of these issues. For example, it provides the `MPI_Get_elements_x` and `MPI_Get_count_x` operations, which use the `MPI_Count` typedef for their `count` argument. `MPI_Count` is defined so as to be able to hold pointer values, which makes it 64 bits long on most 64-bit systems. There are other extended calls (all ending in `_x`) that take `MPI_Count` instead of `int`. The old `MPI_Get_elements` / `MPI_Get_count` operations are retained, but now they return `MPI_UNDEFINED` if the count is larger than what the `int` output argument can hold (this clarification is not present in the MPI-2.2 standard, and using very large counts is undefined behaviour there).
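As a rough sketch of how the receive side might look with the new calls (the function name, source, and tag here are arbitrary assumptions, not prescribed by the standard):

```c
#include <mpi.h>
#include <stdio.h>

/* Receive up to "maxchunks" elements of the user-defined "bigtype"
 * (e.g. the contiguous type from the sketch above) and query the
 * number of basic MPI_DOUBLE elements that actually arrived. */
void recv_big(double *buf, int maxchunks, MPI_Datatype bigtype, MPI_Comm comm)
{
    MPI_Status status;
    MPI_Count nelems;
    int n;

    MPI_Recv(buf, maxchunks, bigtype, 0, 0, comm, &status);

    /* 64-bit safe: counts the basic MPI_DOUBLE elements in the message */
    MPI_Get_elements_x(&status, MPI_DOUBLE, &nelems);
    printf("received %lld doubles\n", (long long)nelems);

    /* the pre-3.0 call still works, but yields MPI_UNDEFINED once the
       count no longer fits into an int */
    MPI_Get_elements(&status, MPI_DOUBLE, &n);
    if (n == MPI_UNDEFINED)
        printf("MPI_Get_elements: count does not fit in an int\n");
}
```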
As pyCthon has already noted, the C++ bindings were deprecated in MPI-2.2 and removed in MPI-3.0, as they are no longer supported by the MPI Forum. You should either use the C bindings or resort to third-party C++ bindings, e.g. Boost.MPI.