
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Problems with GATHERV on one process
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-12-13 08:01:48


Correct. Here's the original commit that fixed the problem:

     https://svn.open-mpi.org/trac/ompi/changeset/16360

And the commit to the v1.2 branch:

     https://svn.open-mpi.org/trac/ompi/changeset/16519

On Dec 12, 2007, at 2:43 PM, Moreland, Kenneth wrote:

> Thanks Tim. I've since noticed similar problems with MPI_Allgatherv and
> MPI_Scatterv. I'm guessing they are all related. Do you happen to know
> if those are being fixed as well?
>
> -Ken
>
>> -----Original Message-----
>> From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
>> Behalf Of Tim Mattox
>> Sent: Tuesday, December 11, 2007 3:34 PM
>> To: Open MPI Users
>> Subject: Re: [OMPI users] Problems with GATHERV on one process
>>
>> Hello Ken,
>> This is a known bug, which is fixed in the upcoming 1.2.5 release. We
>> expect 1.2.5 to come out very soon. We should have a new release
>> candidate for 1.2.5 posted by tomorrow.
>>
>> See these tickets about the bug if you care to look:
>> https://svn.open-mpi.org/trac/ompi/ticket/1166
>> https://svn.open-mpi.org/trac/ompi/ticket/1157
>>
>> On Dec 11, 2007 2:48 PM, Moreland, Kenneth <kmorel_at_[hidden]> wrote:
>>> I recently ran into a problem with GATHERV while running some
>>> randomized tests on my MPI code. The problem seems to occur when
>>> running MPI_Gatherv with a displacement on a communicator with a
>>> single process. The code listed below exercises this errant behavior.
>>> I have tried it on Open MPI 1.1.2 and 1.2.4.
>>>
>>> Granted, this is not a situation that one would normally run into in
>>> a real application, but I just wanted to check to make sure I was not
>>> doing anything wrong.
>>>
>>> -Ken
>>>
>>>
>>>
>>> #include <mpi.h>
>>>
>>> #include <stdlib.h>
>>> #include <stdio.h>
>>>
>>> int main(int argc, char **argv)
>>> {
>>>   int rank;
>>>   MPI_Comm smallComm;
>>>   int senddata[4], recvdata[4], length, offset;
>>>
>>>   MPI_Init(&argc, &argv);
>>>
>>>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>
>>>   // Split up into communicators of size 1.
>>>   MPI_Comm_split(MPI_COMM_WORLD, rank, 0, &smallComm);
>>>
>>>   // Now try to do a gatherv.
>>>   senddata[0] = 5; senddata[1] = 6; senddata[2] = 7; senddata[3] = 8;
>>>   recvdata[0] = 0; recvdata[1] = 0; recvdata[2] = 0; recvdata[3] = 0;
>>>   length = 3;
>>>   offset = 1;
>>>   MPI_Gatherv(senddata, length, MPI_INT,
>>>               recvdata, &length, &offset, MPI_INT, 0, smallComm);
>>>   if (senddata[0] != recvdata[offset])
>>>   {
>>>     printf("%d: %d != %d?\n", rank, senddata[0], recvdata[offset]);
>>>   }
>>>   else
>>>   {
>>>     printf("%d: Everything OK.\n", rank);
>>>   }
>>>
>>>   MPI_Comm_free(&smallComm);
>>>   MPI_Finalize();
>>>   return 0;
>>> }
>>>
>>> Kenneth Moreland
>>> Sandia National Laboratories
>>> email: kmorel_at_[hidden]
>>> phone: (505) 844-8919
>>> fax: (505) 845-0833
>>>
>>>
>>>
>>> _______________________________________________
>>> users mailing list
>>> users_at_[hidden]
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>
>>
>> --
>> Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
>> tmattox_at_[hidden] || timattox_at_[hidden]
>> I'm a bright... http://www.the-brights.net/
>
>
>

-- 
Jeff Squyres
Cisco Systems