
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Prototypes for Fortran MPI_ commands using 64-bit indexing
From: Jeff Hammond (jeff.science_at_[hidden])
Date: 2013-10-31 18:03:24

Stupid question:

Why not just make your first level internal API equivalent to the MPI
public API except for s/int/size_t/g and have the Fortran bindings
drop directly into that? Going through the C int-erface seems like a
recipe for endless pain...


On Thu, Oct 31, 2013 at 4:05 PM, Jeff Squyres (jsquyres)
<jsquyres_at_[hidden]> wrote:
> On Oct 30, 2013, at 11:55 PM, Jim Parker <jimparker96313_at_[hidden]> wrote:
>> Perhaps I should start with the most pressing issue for me: I need 64-bit indexing.
>> @Martin,
>> you indicated that even if I get this up and running, the MPI library still uses signed 32-bit ints to count (your term), or index (my term), the recvbuffer lengths. More concretely,
>> in a call to MPI_Allgatherv(buffer, count, MPI_INTEGER, recvbuf, recvcounts, displs, MPI_INTEGER, MPI_COMM_WORLD, mpierr): count, recvcounts, and displs must be 32-bit integers, not 64-bit. Actually, all I need is displs to hold 64-bit values...
>> If this is true, then compiling OpenMPI this way is not a solution. I'll have to restructure my code to collect 31-bit chunks...
>> Not that it matters, but I'm not using DIRAC, but a custom code to compute circuit analyses.
> Yes, that is correct -- the MPI specification makes us use C "int" for outer-level count specifications. We do use larger types than that internally, though.
> The common workaround for this is to make your own MPI datatype -- perhaps an MPI_TYPE_VECTOR -- that strings together N contiguous datatypes, and then send M of those.
> For example, say you need to send 2^33 (about 8.6 billion) contiguous INTEGERs. You obviously can't represent that count with a C int (or a 4 byte Fortran INTEGER). So what you would do is something like this (forgive me -- I'm a C guy):
> -----
> int *my_buffer = ...; // 2^33 ints, allocated on the heap
> MPI_Datatype my_type;
> // This makes a datatype of 8192 contiguous int's
> MPI_Type_vector(1, 8192, 0, MPI_INT, &my_type);
> MPI_Type_commit(&my_type);
> MPI_Send(my_buffer, 1048576, my_type, ...);
> -----
> This basically sends 1048576 (2^20) datatypes that are each 8192 (2^13) int's long, and is therefore a message of 2^33 ints.
> Make sense?
>> @Jeff,
>> Interesting -- your run fails with a different error than mine. You have problems with the passed variable tempInt, which would make sense for the reasons you gave. My problem, however, is that the local variable "rank" gets overwritten by memory corruption after MPI_RECV is called.
> Odd. :-\
>> Re: config.log. I will try to have the admin guy recompile tomorrow and see if I can get the log for you.
>> BTW, I'm using the gcc 4.7.2 compiler suite on a Rocks 5.4 HPC cluster. I use the options -m64 and -fdefault-integer-8
> Ok. I was using icc/ifort with -m64 and -i8.
> --
> Jeff Squyres
> jsquyres_at_[hidden]

Jeff Hammond