Does this mean we have to split the MPI_Get into many parts of under 2 GB each?
I have an MPI program which first serializes an object and then sends it to another process. The serialized char array is just below 2 GB now, but the data is growing.
One method is to build a large derived type with MPI_Type_vector and pad the char array up to a multiple of the block size, then Send and Recv using the created large type. I think this is better than splitting into multiple sends and receives.
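A minimal sketch of that workaround in C (send_large, CHUNK, and the padding convention are hypothetical; MPI_Type_contiguous is used here as a simpler stand-in for MPI_Type_vector):

#include <mpi.h>
#include <stddef.h>

/* Hypothetical helper: send a buffer longer than INT_MAX bytes by
   describing it as blocks of CHUNK chars, so the count argument
   passed to MPI_Send stays well below 2^31. */
#define CHUNK (1 << 30)                /* 1 GiB per block; illustrative */

static void send_large(const char *buf, size_t padded_len,
                       int dest, int tag, MPI_Comm comm)
{
    MPI_Datatype block;
    MPI_Type_contiguous(CHUNK, MPI_CHAR, &block);
    MPI_Type_commit(&block);

    /* padded_len must already be a multiple of CHUNK -- the
       "align the array" step described above. */
    MPI_Send(buf, (int)(padded_len / CHUNK), block, dest, tag, comm);

    MPI_Type_free(&block);
}

The receiver builds the same type and posts a matching Recv; the true (unpadded) length can travel ahead in a small separate message.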
Is there any graceful method to avoid the problem? Or would using size_t (or ssize_t) for the length parameters be more reasonable in new MPI implementations?
Sorry for the delay in replying. :-(
It's because, for a 32-bit signed int, the value turns negative at 2 GB.
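A minimal C illustration of that wraparound (not from the original thread):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t disp = 2147483648LL;      /* 2 GB, fine as a 64-bit value */
    int32_t narrowed = (int32_t)disp; /* what a 32-bit INTEGER ends up holding */
    printf("%lld -> %d\n", (long long)disp, narrowed);
    /* prints: 2147483648 -> -2147483648 */
    return 0;
}

Any displacement at or past 2 GB that passes through a 32-bit signed integer arrives negative, which is why the target address is suddenly "not mapped".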
On Jun 29, 2010, at 1:46 PM, Price, Brian M (N-KCI) wrote:
> OpenMPI version: 1.3.3
> Platform: IBM P5
> Built OpenMPI 64-bit (i.e., CFLAGS=-q64, CXXFLAGS=-q64, FFLAGS=-q64, FCFLAGS=-q64)
> FORTRAN 90 test program:
> - Create a large array (3.6 GB of 32-bit INTs)
> - Initialize MPI
> - Create a large window to encompass large array (3.6 GB)
> - Have PE 0 get one 32-bit INT from PE 1
> o Lock the window
> o MPI_GET
> o Unlock the window
> - Free the window
> - Finalize MPI
> Built FORTRAN 90 test program 64-bit using OpenMPI wrapper compiler (mpif90 -q64).
> Why would this MPI_GET work fine with displacements all the way up to just under 2 GB, and then fail as soon as the displacement hits 2 GB?
> The MPI_GET succeeds with a displacement of 2147483644 (4 bytes less than 2 GB).
> I get a segmentation fault (address not mapped) when the displacement is 2147483648 (2 GB) or larger.
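For comparison, here is a hedged C sketch of the same lock/get/unlock sequence (not the poster's Fortran program; the array size and ranks are illustrative). In the C binding, the target displacement of MPI_Get is an MPI_Aint, which is 64 bits in a 64-bit build, so 2 GB and beyond is safe as long as the value is never narrowed through a 32-bit int; the Fortran analogue is declaring the displacement as INTEGER(KIND=MPI_ADDRESS_KIND).

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)       /* run with at least 2 processes */
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ~3.6 GB of 32-bit INTs, as in the test program above. */
    const MPI_Aint nbytes = (MPI_Aint)900000000 * sizeof(int);
    int *base = malloc(nbytes);

    MPI_Win win;
    MPI_Win_create(base, nbytes, 1 /* disp unit: bytes */,
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 0) {
        int value;
        MPI_Aint disp = 2147483648LL; /* 2 GB: safe in a 64-bit MPI_Aint */
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Get(&value, 1, MPI_INT, 1, disp, 1, MPI_INT, win);
        MPI_Win_unlock(1, win);
    }

    MPI_Win_free(&win);
    free(base);
    MPI_Finalize();
    return 0;
}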