
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] MPI_Recv_init_null_c from intel test suite fails vs ompi trunk
From: George Bosilca (bosilca_at_[hidden])
Date: 2014-04-24 18:35:05


The problem was not in the start but in the wait (hint: the status is
set in the wait). The difference, I guess, is r27880, which does not
seem to be in the 1.8 branch.

So the 1.8 branch is not returning the correct status for inactive
persistent requests, but it does the right thing for requests bound to
MPI_PROC_NULL.

  George.

On Thu, Apr 24, 2014 at 6:19 PM, Jeff Squyres (jsquyres)
<jsquyres_at_[hidden]> wrote:
> George --
>
> Any idea why it isn't failing on the v1.8 branch? The only major difference I see in mpi/c/start.c between the trunk and v1.8 is your change.
>
>
>
> On Apr 24, 2014, at 2:08 PM, George Bosilca <bosilca_at_[hidden]> wrote:
>
>> r31524 fixes this corner case. The problem was that persistent
>> requests with MPI_PROC_NULL were never activated, so the wait*
>> functions were taking the branch corresponding to inactive requests.
>>
>> George.
>>
>> On Thu, Apr 24, 2014 at 12:14 AM, Gilles Gouaillardet
>> <gilles.gouaillardet_at_[hidden]> wrote:
>>> Folks,
>>>
>>> Attached is an oversimplified version of the MPI_Recv_init_null_c
>>> test from the Intel test suite.
>>>
>>> The test works fine with the v1.6, v1.7, and v1.8 branches but fails
>>> with the trunk.
>>>
>>> I wonder whether the bug is in Open MPI or in the test itself.
>>>
>>> On the one hand, we could consider this a bug in Open MPI:
>>> status.MPI_SOURCE should be MPI_PROC_NULL, since we explicitly posted
>>> a receive request with MPI_PROC_NULL.
>>>
>>> On the other hand (MPI spec, chapter 3.7.3, and
>>> https://svn.open-mpi.org/trac/ompi/ticket/3475),
>>> we could consider the returned value not significant, in which case
>>> MPI_Wait should return an
>>> empty status (an empty status has source = MPI_ANY_SOURCE per the MPI spec).
>>>
>>> For what it's worth, this test succeeds with MPICH (i.e.
>>> status.MPI_SOURCE is MPI_PROC_NULL).
>>>
>>>
>>> What is the correct interpretation of the MPI spec, and what should
>>> be done?
>>> (i.e. fix Open MPI, or fix/skip the test?)
>>>
>>> Cheers,
>>>
>>> Gilles
>>>
>>> _______________________________________________
>>> devel mailing list
>>> devel_at_[hidden]
>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
>>> Link to this post: http://www.open-mpi.org/community/lists/devel/2014/04/14589.php
>> _______________________________________________
>> devel mailing list
>> devel_at_[hidden]
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
>> Link to this post: http://www.open-mpi.org/community/lists/devel/2014/04/14596.php
>
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
>
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
> Link to this post: http://www.open-mpi.org/community/lists/devel/2014/04/14599.php