Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] parallel I/O on 64-bit indexed arrays
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-06-06 21:07:19


If I understand your question correctly, this is *exactly* one of the reasons that the MPI Forum has been arguing about the use of a new type, "MPI_Count", for certain parameters that can get very, very large.

-----
Sidenote: I believe that a workaround for you is to create some new MPI datatypes (e.g., contiguous types) that you can use as multipliers to get to the offsets that you want. I.e., if you make a contiguous datatype of 4 doubles, you can still only specify up to 2B of them, but that now gets you up to an offset of (2B * 4 * sizeof(double)) rather than (2B * sizeof(double)). Make sense? There's a rough sketch just after this sidenote.
-----
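
Here's a rough sketch of that idea in Fortran (illustrative names, no real I/O, untested -- just to show the shape of it):

    program chunk_sketch
      ! Sketch of the contiguous-type workaround from the sidenote:
      ! express counts/offsets in units of a 4-double "chunk" instead
      ! of single doubles.  Caveat: offsets must then be multiples of
      ! 4 doubles.  Illustrative only; error handling omitted.
      use mpi
      implicit none
      integer :: ierr, chunk
      integer(kind=MPI_ADDRESS_KIND) :: lb, extent

      call MPI_INIT(ierr)

      ! One "chunk" = 4 contiguous doubles.
      call MPI_TYPE_CONTIGUOUS(4, MPI_DOUBLE_PRECISION, chunk, ierr)
      call MPI_TYPE_COMMIT(chunk, ierr)

      ! Counts of up to ~2B chunks now reach (2B * 4 * sizeof(double))
      ! bytes instead of (2B * sizeof(double)).
      call MPI_TYPE_GET_EXTENT(chunk, lb, extent, ierr)
      print *, 'chunk extent (bytes):', extent

      ! ... build subarray types and file views out of "chunk" instead
      ! of MPI_DOUBLE_PRECISION, with sizes/starts measured in chunks ...

      call MPI_TYPE_FREE(chunk, ierr)
      call MPI_FINALIZE(ierr)
    end program chunk_sketch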

This ticket for the MPI-3 standard is a first step in the right direction, but won't do everything you need (this is more FYI):

    https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/265

See the PDF attached to the ticket; it's going up for a "first reading" in a month. It'll hopefully be part of the MPI-3 standard by the end of the year (Fab Tillier, CC'ed, has been the chief proponent of this ticket for the past several months).

Quincey Koziol from the HDF Group is going to propose a follow-on to this ticket, specifically about the case you're referring to -- large counts for file functions and datatype constructors. Quincey -- can you expand on what you'll be proposing, perchance?

On Jun 6, 2011, at 5:26 AM, Troels Haugboelle wrote:

> Hello!
>
> The problem I face is not Open MPI specific, but I hope the MPI wizards on the list can help me nonetheless.
>
> I am running and developing a large-scale scientific code written in Fortran90. One type of object is a global 1-D vector containing data for the particles in the application. I want to use MPI commands to save the particle data, but the global 1-D array holding the data can reach up to 100 billion elements, so array offsets and global sizes have to be 64-bit.
>
> We use MPI_TYPE_CREATE_SUBARRAY to make a custom type and then MPI_FILE_SET_VIEW and MPI_FILE_WRITE_ALL to save the 3-D field data. This works with good performance even on very large installations / runs, but the arguments to MPI_TYPE_CREATE_SUBARRAY are 32-bit integers, and that is not sufficient for the 1-D particle array: it needs 64-bit offsets and 64-bit global sizes. Local sizes for each thread are 32-bit, though.
>
> What MPI call could I use to make a custom MPI type that describes the above data, with 64-bit indices / global size?
>
> As an example, for 3 threads the type layout would be:
>
> Thread 0: n0 reals, n1 holes, n2 holes
> Thread 1: n0 holes, n1 reals, n2 holes
> Thread 2: n0 holes, n1 holes, n2 reals
>
> The problem is that I have to generalize this to 100 billion elements and 250k threads.
>
> As a remark, given that data keeps getting bigger: it would be very nice if the arguments to MPI_TYPE_CREATE_SUBARRAY and other similar routines could be 64-bit.
>
> TIA,
>
> Troels
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
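
For reference, the subarray + file view pattern Troels describes looks roughly like this for the 1-D particle array -- an illustrative sketch with made-up sizes, not his actual code, and still bound by the 32-bit limits in question:

    program particle_io_sketch
      ! Sketch of the pattern in the quoted message: each rank writes
      ! one contiguous block ("reals") of a global 1-D particle array,
      ! with "holes" where the other ranks' data lives.  Illustrative
      ! names and sizes; error handling omitted.
      use mpi
      implicit none
      integer :: ierr, rank, nprocs, fh, filetype, local_n
      integer :: gsize(1), lsize(1), start(1)   ! 32-bit INTEGERs: the limit
      integer(kind=MPI_OFFSET_KIND) :: disp
      double precision, allocatable :: particles(:)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

      local_n  = 1000                 ! per-rank count: 32-bit is fine here
      gsize(1) = nprocs * local_n     ! global size: overflows past 2^31-1
      lsize(1) = local_n
      start(1) = rank * local_n       ! global offset: same 32-bit problem
      allocate(particles(local_n))
      particles = rank

      ! One rank's "reals", every other rank's "holes".
      call MPI_TYPE_CREATE_SUBARRAY(1, gsize, lsize, start, &
           MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, filetype, ierr)
      call MPI_TYPE_COMMIT(filetype, ierr)

      disp = 0
      call MPI_FILE_OPEN(MPI_COMM_WORLD, 'particles.dat', &
           MPI_MODE_CREATE + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ierr)
      call MPI_FILE_SET_VIEW(fh, disp, MPI_DOUBLE_PRECISION, filetype, &
           'native', MPI_INFO_NULL, ierr)
      call MPI_FILE_WRITE_ALL(fh, particles, local_n, &
           MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, ierr)
      call MPI_FILE_CLOSE(fh, ierr)

      call MPI_TYPE_FREE(filetype, ierr)
      call MPI_FINALIZE(ierr)
    end program particle_io_sketch

The gsize(1) and start(1) arguments are exactly where the 32-bit INTEGERs run out once the global array approaches 100 billion elements.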

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/