MPI Questions

1

MPI runs my program with multiple processes. I'd like one of these processes to sleep for a while so that it's using minimal CPU. std::this_thread::sleep_for() looks like what I want, but that thread ...
Banjermasin asked 5/5, 2016 at 23:55
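
A minimal C sketch of one common workaround (an assumption, not the asker's code, and using POSIX usleep rather than std::this_thread::sleep_for): the idle rank polls with MPI_Iprobe and sleeps between polls instead of spinning inside a blocking receive.

/* Sketch: sleep-poll instead of busy-waiting in MPI_Recv. */
#include <mpi.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        int flag = 0, msg;
        MPI_Status status;
        /* Check for a pending message; otherwise nap for 100 ms. */
        while (!flag) {
            MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &status);
            if (!flag) usleep(100000);   /* near-zero CPU while waiting */
        }
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 woke up and received %d\n", msg);
    } else if (rank == 0) {
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}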

1

Can someone explain and tell me more about the MPI_Comm_split communicator? MPI_Comm_split(MPI_COMM_WORLD, my_row, my_rank,&my_row_comm); This is just an example I came across while reading some basic docume...
Bouchier asked 10/4, 2016 at 16:16
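
A hedged sketch of what the quoted call does, assuming for illustration a 2x2 process grid (4 ranks): ranks that pass the same color (my_row) end up in the same new communicator, and the key (my_rank) orders them within it.

/* Sketch: split MPI_COMM_WORLD into one communicator per row. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int my_rank, my_row, row_rank, row_size;
    MPI_Comm my_row_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    /* Assumption for illustration: a 2x2 grid, so ranks 0-1 form row 0
     * and ranks 2-3 form row 1. Run with, e.g., mpirun -np 4. */
    const int q = 2;
    my_row = my_rank / q;   /* color: which row I belong to */
    MPI_Comm_split(MPI_COMM_WORLD, my_row, my_rank, &my_row_comm);

    MPI_Comm_rank(my_row_comm, &row_rank);
    MPI_Comm_size(my_row_comm, &row_size);
    printf("world rank %d -> row %d, rank %d of %d in that row\n",
           my_rank, my_row, row_rank, row_size);

    MPI_Comm_free(&my_row_comm);
    MPI_Finalize();
    return 0;
}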

1

Solved

I need to update some old code to work with the most recent version of OpenMPI, but I'm very confused by the new --map-by system. In particular, I'm not sure how to replace --cpus-per-proc N. Se...
Senhorita asked 7/4, 2016 at 22:46

2

Solved

I am trying to send data from process 0 to process 1. This program succeeds when the buffer size is less than 64 KB, but hangs if the buffer gets much larger. The following code should reproduce th...
Mcdougald asked 3/4, 2016 at 17:21
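
The asker's code is not shown here; the sketch below reconstructs the classic cause of this symptom, as an assumption: MPI_Send may return immediately for small ("eager") messages but block for large ones until a matching receive is posted, so code that relies on sends completing early deadlocks once the eager threshold (around 64 KB in some MPI stacks) is crossed. MPI_Sendrecv is one standard fix.

/* Sketch of the symptom and one fix (illustrative sizes, 2 ranks). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank;
    const int N = 1 << 20;   /* ~4 MB of ints: well past any eager limit */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *sendbuf = malloc(N * sizeof *sendbuf);
    int *recvbuf = malloc(N * sizeof *recvbuf);
    for (int i = 0; i < N; ++i) sendbuf[i] = rank;
    int peer = (rank == 0) ? 1 : 0;

    /* Deadlock-prone ordering (works only below the eager threshold):
     *   MPI_Send(sendbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD);
     *   MPI_Recv(recvbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
     */

    /* Safe version: send and receive in one call, letting MPI pair them. */
    MPI_Sendrecv(sendbuf, N, MPI_INT, peer, 0,
                 recvbuf, N, MPI_INT, peer, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(sendbuf); free(recvbuf);
    MPI_Finalize();
    return 0;
}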

1

Solved

I downloaded mpj-v0_44 and extracted it to C:\mpj, set the Windows system environment variable MPJ_HOME to C:\mpj, and added C:\mpj\bin to PATH. I added mpi.jar and mpj.jar in Project Structure -> Librari...
Olivann asked 1/4, 2016 at 12:30

0

I would like to create a loop over a list of parameters (say 50). The loop should be shared by several processors (say 4) in parallel. I would like each processor to pick the next parameter from...
Lightproof asked 25/3, 2016 at 16:34
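
A hedged master/worker sketch (the parameter list and sizes are illustrative, not from the question): rank 0 hands out the next parameter index on demand, so each worker picks up a new parameter as soon as it finishes the previous one.

/* Sketch: dynamic work distribution; needs at least 2 processes. */
#include <mpi.h>
#include <stdio.h>

#define NPARAMS 50
#define TAG_REQ  1
#define TAG_WORK 2

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Dispatcher: answer each request with the next index, then -1. */
        int next = 0, done = 0, dummy, idx;
        MPI_Status st;
        while (done < size - 1) {
            MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_REQ,
                     MPI_COMM_WORLD, &st);
            idx = (next < NPARAMS) ? next++ : -1;
            if (idx == -1) done++;
            MPI_Send(&idx, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
        }
    } else {
        /* Worker: keep asking for parameter indices until told to stop. */
        int dummy = 0, idx;
        for (;;) {
            MPI_Send(&dummy, 1, MPI_INT, 0, TAG_REQ, MPI_COMM_WORLD);
            MPI_Recv(&idx, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (idx < 0) break;
            printf("rank %d processes parameter %d\n", rank, idx);
        }
    }

    MPI_Finalize();
    return 0;
}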

1

Solved

I would like to display "hello world" via MPI on different Google cloud compute instances with the help of the following code: from mpi4py import MPI size = MPI.COMM_WORLD.Get_size() rank = MPI.C...

1

I have downloaded an R script from the web that says that it "must be run in an MPI environment". Now I know literally nothing of MPI, except that it is used for parallel computing, a...
Intensity asked 3/5, 2015 at 9:31

3

Solved

I have the following C++ program, which uses no communication, and the same identical work is done on all cores. I know that this doesn't use parallel processing at all: unsigned n = 1300000...
Flatus asked 13/2, 2016 at 18:1
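
A hedged sketch of the usual fix (the asker's loop body is not shown, so the summation here is a stand-in): split the iteration range by rank so each process does roughly 1/size of the work, then combine the partial results with MPI_Reduce.

/* Sketch: divide a loop range across ranks and reduce the result. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    unsigned long n = 13000000UL;          /* placeholder problem size */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Contiguous block of iterations for this rank. */
    unsigned long chunk = n / size;
    unsigned long lo = rank * chunk;
    unsigned long hi = (rank == size - 1) ? n : lo + chunk;

    double local = 0.0, total = 0.0;
    for (unsigned long i = lo; i < hi; ++i)
        local += 1.0 / (i + 1);            /* placeholder work */

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}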

2

I have been following the docs for parallel programming in Julia, and to my mind, which thinks in terms of OpenMP or MPI, the design choice seems quite strange. I have an application where I want data t...
Thinkable asked 5/2, 2016 at 23:59

3

Solved

From here: Is file append atomic in UNIX? Consider a case where multiple processes open the same file and append to it. O_APPEND guarantees that seeking to the end of file and then beginning the w...
Fr asked 17/10, 2012 at 20:33
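
A small C sketch of the pattern under discussion (illustrative, not from the linked question): every process opens the same file with O_APPEND and writes each record with a single write() call, relying on the append offset being applied atomically per write.

/* Sketch: per-process appends to a shared log file. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("shared.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char line[64];
    int len = snprintf(line, sizeof line, "pid %ld: one record\n",
                       (long)getpid());

    /* One record per write(): interleaving between writes is possible,
     * but each individual write lands at the then-current end of file. */
    if (write(fd, line, len) != len) perror("write");

    close(fd);
    return 0;
}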

1

I'm having a problem with Rmpi wherein I try to load it and I get this error message: > library('Rmpi') Error in dyn.load(file, DLLpath = DLLpath, ...) : unable to load shared library '/usr/li...
Berke asked 1/10, 2013 at 16:6

1

Solved

What is the difference, and which one should one use in practice? I have found this IBM link and this question MPI - one function for MPI_Init and MPI_Init_thread. I am interested only in C, if th...
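
A minimal sketch of the MPI_Init_thread side of the comparison: it lets the program request a threading level and reports what the library actually provides, whereas plain MPI_Init makes no promise beyond (effectively) MPI_THREAD_SINGLE.

/* Sketch: request a thread level and check what was granted. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;

    /* Ask for full thread support; check what we actually got. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        /* The library cannot handle concurrent MPI calls from several
         * threads of this process: fall back or abort here. */
        fprintf(stderr, "warning: only thread level %d provided\n", provided);
    }

    MPI_Finalize();
    return 0;
}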

2

Solved

I am new to HPC and the task at hand is to do a performance analysis and comparison between MPICH and OpenMPI on a cluster which comprises IBM servers equipped with dual-core AMD Opteron process...
Thesaurus asked 19/3, 2011 at 5:26

3

Consider an MPI application based on two steps which we shall call load and globalReduce. Just for simplicity, the software is described this way, yet there is a lot more going on, so it is not ...
Lentigo asked 31/12, 2015 at 1:4

2

How do you send blocks of a 2-D array to different processors? Suppose the 2D array size is 400x400 and I want to send blocks of size 100x100 to different processors. The idea is that each processor ...
Coimbatore asked 13/2, 2012 at 23:7
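
A hedged sketch, assuming a row-major 400x400 int array and 16 processes arranged as a 4x4 grid of 100x100 blocks: rank 0 describes each block in place with MPI_Type_create_subarray and sends it, and each rank receives its block into a contiguous buffer. MPI_Scatterv with a resized block type is a common alternative.

/* Sketch: distribute 100x100 blocks of a 400x400 array from rank 0. */
#include <mpi.h>
#include <stdlib.h>

#define N   400
#define B   100
#define NB  (N / B)          /* 4 blocks per dimension */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* expected: NB*NB = 16 */

    int *block = malloc(B * B * sizeof *block);   /* local 100x100 piece */

    if (rank == 0) {
        int *global = malloc(N * N * sizeof *global);
        for (int i = 0; i < N * N; ++i) global[i] = i;

        int sizes[2]    = { N, N };
        int subsizes[2] = { B, B };
        for (int r = 0; r < size; ++r) {
            /* Top-left corner of the block owned by rank r. */
            int starts[2] = { (r / NB) * B, (r % NB) * B };
            MPI_Datatype blocktype;
            MPI_Type_create_subarray(2, sizes, subsizes, starts,
                                     MPI_ORDER_C, MPI_INT, &blocktype);
            MPI_Type_commit(&blocktype);
            if (r == 0) {
                /* Rank 0 keeps its own block: just copy it out. */
                for (int i = 0; i < B; ++i)
                    for (int j = 0; j < B; ++j)
                        block[i * B + j] = global[i * N + j];
            } else {
                MPI_Send(global, 1, blocktype, r, 0, MPI_COMM_WORLD);
            }
            MPI_Type_free(&blocktype);
        }
        free(global);
    } else {
        MPI_Recv(block, B * B, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    free(block);
    MPI_Finalize();
    return 0;
}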

1

Compiler: gfortran-4.8.5 MPI library: OpenMPI-1.7.2 (preinstalled OpenSuSE 13.2) This program: use mpi implicit none real*16 :: x integer :: ierr, irank, type16 call MPI_Init(ierr) call...
Blaubok asked 13/10, 2015 at 17:33

3

Solved

I need to create a communicator with a cube topology, then select a face of the cube and use MPI_Cart_Shift to implement messaging between the processes that lie on that boundary. For example, I am proce...
Dituri asked 9/12, 2015 at 22:27
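
A hedged sketch (the cube edge length and the face being selected are assumptions, since the question is truncated): create a 3D Cartesian communicator, then use MPI_Cart_shift along one axis; with non-periodic dimensions, processes on a boundary face see MPI_PROC_NULL as their missing neighbour.

/* Sketch: 2x2x2 Cartesian cube and face detection via MPI_Cart_shift. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Comm cube;
    int dims[3]    = { 2, 2, 2 };     /* 2x2x2 cube -> run with 8 processes */
    int periods[3] = { 0, 0, 0 };     /* non-periodic: faces exist          */
    int coords[3], left, right;

    MPI_Init(&argc, &argv);
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, /*reorder=*/1, &cube);

    if (cube != MPI_COMM_NULL) {      /* ranks beyond the grid get no comm */
        MPI_Comm_rank(cube, &rank);
        MPI_Cart_coords(cube, rank, 3, coords);

        /* Neighbours one step away along the x axis (dimension 0). */
        MPI_Cart_shift(cube, 0, 1, &left, &right);

        /* A process is on the x = 0 face iff its "left" neighbour is null. */
        if (left == MPI_PROC_NULL)
            printf("rank %d at (%d,%d,%d) lies on the x=0 face\n",
                   rank, coords[0], coords[1], coords[2]);

        MPI_Comm_free(&cube);
    }
    MPI_Finalize();
    return 0;
}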

1

The MPI-3 Standard states that MPI_Win_lock(...) with lock type MPI_LOCK_SHARED must be a blocking (exclusive) lock if and only if the origin and target process are the same. MPI_Win_lock_all is th...
Designation asked 23/4, 2015 at 13:43

2

I can use MPI_Comm_size to get the total number of processors. But how can I get the number of real physical machines?
Lialiabilities asked 6/12, 2015 at 7:49
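
A hedged MPI-3 sketch: MPI_Comm_split_type with MPI_COMM_TYPE_SHARED groups the ranks that share a node, and counting one leader per node gives the number of physical machines. (With older MPI, gathering MPI_Get_processor_name strings and counting the unique names is a common alternative.)

/* Sketch: count distinct nodes by counting per-node communicators. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int world_rank, node_rank, is_leader, nnodes;
    MPI_Comm node_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* One communicator per shared-memory domain (i.e. per node). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* Exactly one rank per node contributes 1 to the global sum. */
    is_leader = (node_rank == 0) ? 1 : 0;
    MPI_Allreduce(&is_leader, &nnodes, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (world_rank == 0) printf("running on %d machine(s)\n", nnodes);

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}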

4

Solved

I have a small program. #include "mpi.h" #include <stdio.h> int main(int argc, char *argv[]) { int rank, size; int buf; int err; MPI_Status status; err = MPI_Init(&argc, &argv); if...
Sunglass asked 11/10, 2010 at 0:25

3

Solved

What is the main difference between the MPI_Allgather and MPI_Alltoall functions in MPI? I mean, can someone give me examples where MPI_Allgather will be helpful and MPI_Alltoall will not? And v...
Libbi asked 24/2, 2013 at 6:13
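
A hedged illustration with one int per rank: MPI_Allgather gives every rank the same concatenation of everyone's contribution, while MPI_Alltoall delivers a distinct element to each destination, like a transpose of the send buffers.

/* Sketch: contrast MPI_Allgather and MPI_Alltoall with one int per rank. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *gathered = malloc(size * sizeof *gathered);
    int *sendvec  = malloc(size * sizeof *sendvec);
    int *recvvec  = malloc(size * sizeof *recvvec);

    /* Allgather: every rank contributes one value, everyone gets them all. */
    int mine = 100 + rank;
    MPI_Allgather(&mine, 1, MPI_INT, gathered, 1, MPI_INT, MPI_COMM_WORLD);
    /* gathered = {100, 101, ..., 100+size-1} on every rank */

    /* Alltoall: element j of my send buffer goes only to rank j. */
    for (int j = 0; j < size; ++j) sendvec[j] = rank * 10 + j;
    MPI_Alltoall(sendvec, 1, MPI_INT, recvvec, 1, MPI_INT, MPI_COMM_WORLD);
    /* recvvec[i] = i*10 + rank: each rank ends up with a different vector */

    if (rank == 0)
        printf("rank 0: gathered[0]=%d, recvvec[0]=%d\n",
               gathered[0], recvvec[0]);

    free(gathered); free(sendvec); free(recvvec);
    MPI_Finalize();
    return 0;
}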

1

Solved

I'm having trouble understanding the MPI_Type_create_struct method. Say we have a struct: struct foo { float value; char rank; }; And we want to send this struct to another process. Conside ...
Messiah asked 9/11, 2015 at 21:51
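
A hedged sketch for the quoted struct: record each member's block length, byte offset (via offsetof), and MPI type, then build and commit the derived datatype with MPI_Type_create_struct.

/* Sketch: derived datatype for struct foo, sent from rank 0 to rank 1. */
#include <mpi.h>
#include <stddef.h>    /* offsetof */
#include <stdio.h>

struct foo {
    float value;
    char  rank;
};

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One entry per struct member: block length, byte offset, MPI type. */
    int          blocklens[2] = { 1, 1 };
    MPI_Aint     offsets[2]   = { offsetof(struct foo, value),
                                  offsetof(struct foo, rank) };
    MPI_Datatype types[2]     = { MPI_FLOAT, MPI_CHAR };
    MPI_Datatype footype;

    MPI_Type_create_struct(2, blocklens, offsets, types, &footype);
    MPI_Type_commit(&footype);
    /* For arrays of struct foo, resizing footype to sizeof(struct foo)
     * with MPI_Type_create_resized accounts for trailing padding. */

    struct foo f = { 3.14f, 0 };
    if (rank == 0) {
        MPI_Send(&f, 1, footype, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&f, 1, footype, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received value=%f rank=%d\n", f.value, f.rank);
    }

    MPI_Type_free(&footype);
    MPI_Finalize();
    return 0;
}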

0

I am trying to compile a code with gfortran. One of the first things that happens in the compilation is the creation of constants.mod. Soon after that gfortran tells me: Fatal Error: Cannot read ...
Amelioration asked 6/11, 2015 at 14:23

2

When calling MPI_BCAST, is there any implied synchronization? For example, if the sender process were to reach the MPI_BCAST before the others, could it do the BCAST and then continue without any ackno...
Sphygmomanometer asked 11/7, 2011 at 16:6
