
Subject: Re: [OMPI users] MPI_IN_PLACE with GATHERV, ALLGATHERV, and SCATTERV
From: Jeff Hammond (jeff.science_at_[hidden])
Date: 2013-10-08 18:14:20


"I have made a test case..." means there is little reason not to
attach said test case to the email for verification :-)

The following is in mpi.h.in in the Open MPI trunk.

=========================
/*
 * Just in case you need it. :-)
 */
#define OPEN_MPI 1

/*
 * MPI version
 */
#define MPI_VERSION 2
#define MPI_SUBVERSION 2
=========================

Two things can be said from this:

(1) You can work around this non-portable awfulness with the C
preprocessor by testing for the OPEN_MPI symbol; a sketch follows
point (2).

(2) Open MPI claims to be compliant with the MPI 2.2 standard, hence
any failure to adhere to the behavior that document specifies for
MPI_IN_PLACE is a bug in Open MPI.
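
For what it's worth, here is a minimal sketch of the workaround in
(1). The helper name, the use of MPI_DOUBLE, and the assumption that
every rank's chunk already sits at buf + displs[rank] (and that all
ranks have the recvcounts/displs arrays) are mine, not from your code:

=========
#include <mpi.h>

/* Hypothetical helper (names and types are illustrative): gather
   variable-length chunks of doubles into buf on root, where each
   rank's own chunk already sits at buf + displs[rank]. */
void gatherv_inplace(double *buf, int *recvcounts, int *displs,
                     int root, MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
#if defined(OPEN_MPI)
    /* Open MPI: avoid MPI_IN_PLACE and fall back to the old
       (formally non-conforming) aliasing trick of sending from a
       slice of the receive buffer. */
    MPI_Gatherv(buf + displs[rank], recvcounts[rank], MPI_DOUBLE,
                buf, recvcounts, displs, MPI_DOUBLE, root, comm);
#else
    /* Everyone else: the MPI 2.2 in-place form.  The root's own
       contribution is read from its slot in the receive buffer;
       non-root ranks ignore the receive arguments entirely. */
    if (rank == root)
        MPI_Gatherv(MPI_IN_PLACE, 0, MPI_DOUBLE,
                    buf, recvcounts, displs, MPI_DOUBLE, root, comm);
    else
        MPI_Gatherv(buf + displs[rank], recvcounts[rank], MPI_DOUBLE,
                    NULL, NULL, NULL, MPI_DOUBLE, root, comm);
#endif
}
=========

The same #ifdef applies to your ALLGATHERV and SCATTERV calls; the
OPEN_MPI branch is the non-conforming legacy path, so you can delete
it once the MPI_IN_PLACE behavior is fixed.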

Best,

Jeff

On Tue, Oct 8, 2013 at 2:40 PM, Gerlach, Charles A.
<charles.gerlach_at_[hidden]> wrote:
> I have an MPI code that was developed using MPICH1 and Open MPI before the
> MPI-2 standard became commonplace (before MPI_IN_PLACE was an option).
>
> So, my code has many examples of GATHERV, ALLGATHERV, and SCATTERV, where I
> pass the same array in as the SEND_BUF and the RECV_BUF, and this has worked
> fine for many years.
>
> Intel MPI and MPICH2 explicitly disallow this behavior according to the
> MPI-2 standard. So, I have gone through and used MPI_IN_PLACE for all the
> GATHERV/SCATTERVs that used to pass the same array twice. This code now
> works with MPICH2 and Intel MPI, but fails with Open MPI 1.6.5 on multiple
> platforms and compilers.
>
> PLATFORM             COMPILER         SUCCESS? (for at least one simple example)
> --------------------------------------------------------------------------
> SLED 12.3 (x86-64)   Portland Group   fails
> SLED 12.3 (x86-64)   g95              fails
> SLED 12.3 (x86-64)   gfortran         works
> OS X 10.8            Intel            fails
>
> In every case where Open MPI fails with the MPI_IN_PLACE code, I can go back
> to the original code that passes the same array twice instead of using
> MPI_IN_PLACE, and it is fine.
>
> I have made a test case doing an individual GATHERV with MPI_IN_PLACE, and
> it works with Open MPI. So it looks like there is some interaction with my
> code that is causing the problem. I have no idea how to go about trying to
> debug it.
>
> In summary:
>
> Open MPI 1.6.5 crashes my code when I use GATHERV, ALLGATHERV, and SCATTERV
> with MPI_IN_PLACE.
>
> Intel MPI and MPICH2 work with my code when I use GATHERV, ALLGATHERV, and
> SCATTERV with MPI_IN_PLACE.
>
> Open MPI 1.6.5 works with my code when I pass the same array to SEND_BUF and
> RECV_BUF instead of using MPI_IN_PLACE for those same GATHERV, ALLGATHERV,
> and SCATTERVs.
>
> -Charles

-- 
Jeff Hammond
jeff.science_at_[hidden]