Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] Simplified: Misuse or bug with nested types?
From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2013-04-23 18:03:23


Sorry for the delay.

My C++ is a bit rusty, but this does not seem correct to me.

You're making the datatypes relative to an arbitrary address (&lPtrBase) in a static method on each class. You really need the datatypes to be relative to each instance's *this* pointer.

Doing so allows MPI to read/write the data relative to the specific object instances that you're trying to send/receive.

Make sense?
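
To make this concrete, here is a rough sketch of the pattern I mean. It is not your actual attachment (the names Inner, Outer and buildType are made up stand-ins), but it shows the displacements being computed from the object's own address, and the extents being resized so that an array of these objects strides correctly:

#include <mpi.h>

// Rough sketch only: Inner/Outer are hypothetical stand-ins for the real classes.
struct Inner {
    long aLong;
    char aChar;

    // {MPI_LONG, MPI_CHAR}: displacements relative to *this* instance.
    MPI_Datatype buildType()
    {
        MPI_Aint lBase, lDisp[2];
        MPI_Get_address(this,   &lBase);
        MPI_Get_address(&aLong, &lDisp[0]);
        MPI_Get_address(&aChar, &lDisp[1]);
        lDisp[0] -= lBase;
        lDisp[1] -= lBase;

        int          lLen[2]   = {1, 1};
        MPI_Datatype lTypes[2] = {MPI_LONG, MPI_CHAR};
        MPI_Datatype lStruct, lType;
        MPI_Type_create_struct(2, lLen, lDisp, lTypes, &lStruct);
        // Resize the extent to sizeof(Inner) so an array of Inner strides correctly.
        MPI_Type_create_resized(lStruct, 0, sizeof(Inner), &lType);
        MPI_Type_free(&lStruct);
        MPI_Type_commit(&lType);
        return lType;
    }
};

struct Outer {
    long  aId;
    Inner aPair[2];

    // {MPI_LONG, {MPI_LONG, MPI_CHAR} x 2}: displacements relative to *this* instance.
    MPI_Datatype buildType()
    {
        MPI_Aint lBase, lDisp[2];
        MPI_Get_address(this,      &lBase);
        MPI_Get_address(&aId,      &lDisp[0]);
        MPI_Get_address(&aPair[0], &lDisp[1]);
        lDisp[0] -= lBase;
        lDisp[1] -= lBase;

        MPI_Datatype lInner    = aPair[0].buildType();
        int          lLen[2]   = {1, 2};
        MPI_Datatype lTypes[2] = {MPI_LONG, lInner};
        MPI_Datatype lStruct, lType;
        MPI_Type_create_struct(2, lLen, lDisp, lTypes, &lStruct);
        MPI_Type_create_resized(lStruct, 0, sizeof(Outer), &lType);
        MPI_Type_free(&lStruct);
        MPI_Type_free(&lInner);
        MPI_Type_commit(&lType);
        return lType;
    }
};

With something like that in place, the send side is just "Outer lData[6]; MPI_Datatype lType = lData[0].buildType(); MPI_Send(lData, 6, lType, 1, 0, MPI_COMM_WORLD);" (and the matching MPI_Recv on the other rank), and MPI resolves every field from the address of the objects you actually pass in.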

On Apr 23, 2013, at 5:01 PM, Eric Chamberland <Eric.Chamberland_at_[hidden]> wrote:

> One more piece of information: I just tested the example with Intel MPI 4.0.1.007 and it works correctly...
>
> So the problem seems to be only with OpenMPI... which is the default distribution we use... :-/
>
> Is my example code too long?
>
> Eric
>
>> On 2013-04-23 09:55, Eric Chamberland wrote:
>> Sorry,
>>
>> here is the attachment...
>>
>> Eric
>>
>> On 04/23/2013 09:54 AM, Eric Chamberland wrote:
>>> Hi,
>>>
>>> I have sent a previous message showing something that I think is a bug
>>> (or maybe a misuse, but...).
>>>
>>> I have simplified the example I sent earlier: it is now almost half
>>> the lines of code and the structures are simpler... but it still
>>> shows the wrong behaviour.
>>>
>>> Briefly, we construct different MPI_Datatypes and nest them into a final
>>> type which is:
>>> {MPI_LONG,{{MPI_LONG,MPI_CHAR}*2}}
>>>
>>> Here is the output from OpenMPI 1.6.3:
>>>
>>> Rank 0 send this:
>>> i: 0 => {{0},{{3,%},{7,5}}}
>>> i: 1 => {{1},{{3,%},{7,5}}}
>>> i: 2 => {{2},{{3,%},{7,5}}}
>>> i: 3 => {{3},{{3,%},{7,5}}}
>>> i: 4 => {{4},{{3,%},{7,5}}}
>>> i: 5 => {{5},{{3,%},{7,5}}}
>>> MPI_Recv returned success and everything in MPI_Status is correct after
>>> receive.
>>> Rank 1 received this:
>>> i: 0 => {{0},{{3,%},{-999,$}}} *** ERROR ****
>>> i: 1 => {{1},{{3,%},{-999,$}}} *** ERROR ****
>>> i: 2 => {{2},{{3,%},{-999,$}}} *** ERROR ****
>>> i: 3 => {{3},{{3,%},{-999,$}}} *** ERROR ****
>>> i: 4 => {{4},{{3,%},{-999,$}}} *** ERROR ****
>>> i: 5 => {{5},{{3,%},{-999,$}}} *** ERROR ****
>>>
>>> Here is the expected output, obtained with mpich-3.0.3:
>>>
>>> Rank 0 send this:
>>> i: 0 => {{0},{{3,%},{7,5}}}
>>> i: 1 => {{1},{{3,%},{7,5}}}
>>> i: 2 => {{2},{{3,%},{7,5}}}
>>> i: 3 => {{3},{{3,%},{7,5}}}
>>> i: 4 => {{4},{{3,%},{7,5}}}
>>> i: 5 => {{5},{{3,%},{7,5}}}
>>> MPI_Recv returned success and everything in MPI_Status is correct after
>>> receive.
>>> Rank 1 received this:
>>> i: 0 => {{0},{{3,%},{7,5}}} OK
>>> i: 1 => {{1},{{3,%},{7,5}}} OK
>>> i: 2 => {{2},{{3,%},{7,5}}} OK
>>> i: 3 => {{3},{{3,%},{7,5}}} OK
>>> i: 4 => {{4},{{3,%},{7,5}}} OK
>>> i: 5 => {{5},{{3,%},{7,5}}} OK
>>>
>>> Is it related to the bug reported here:
>>> http://www.open-mpi.org/community/lists/devel/2013/04/12267.php ?
>>>
>>> Thanks,
>>>
>>> Eric
>>>
>>>
>>
>

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/