
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Scatter+Group Communicator Issue
From: Gustavo Correa (gus_at_[hidden])
Date: 2012-03-06 11:43:24


Hi Timothy

There is no call to MPI_Finalize in the program.
Could this be the problem?
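For what it's worth, here is a sketch of the test program with both fixes applied: the MPI_COMM_NULL guard Nadia suggested, plus a closing MPI_Finalize. I haven't run this, but it should be close:

```fortran
program scatter_fixed

   use mpi

   implicit none

   integer :: ierr, my_rank, out
   integer :: procValues(3), procRanks(3)
   integer :: in_cnt, orig_group, new_group, new_comm

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)

   procRanks  = (/0, 1, 3/)
   procValues = (/0, 434, 268/)
   in_cnt = 3

   ! Create the sub-communicator as in the original program
   call MPI_COMM_GROUP(MPI_COMM_WORLD, orig_group, ierr)
   call MPI_GROUP_INCL(orig_group, in_cnt, procRanks, new_group, ierr)
   call MPI_COMM_CREATE(MPI_COMM_WORLD, new_group, new_comm, ierr)

   ! MPI_COMM_CREATE returns MPI_COMM_NULL on the ranks that are not
   ! in new_group (rank 2 here), so guard the collective:
   if (new_comm /= MPI_COMM_NULL) then
      call MPI_SCATTER(procValues, 1, MPI_INTEGER, out, 1, MPI_INTEGER, &
                       0, new_comm, ierr)
   end if

   print *, "Complete - Rank", my_rank

   call MPI_FINALIZE(ierr)

end program scatter_fixed
```

Run with something like "mpirun -np 4 ./scatter_fixed"; rank 2 simply skips the scatter.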

I hope this helps,
Gus Correa

On Mar 6, 2012, at 10:19 AM, Timothy Stitt wrote:

> Will definitely try that. Thanks for the suggestion.
>
> Basically, I need to be able to scatter values from a sender to a subset of ranks (as I scale my production code I don't want to use MPI_COMM_WORLD, since the receiver list will be quite small) without the receivers knowing in advance of the scatter whether they are to receive something.
>
> Thanks again for any help,
>
> Tim.
>
> On Mar 6, 2012, at 10:17 AM, <nadia.derbey_at_[hidden]> wrote:
>
>> Tim,
>>
>> Since MPI_Comm_create sets the created communicator to MPI_COMM_NULL for the processes that are not in the group, maybe preceding your collectives with:
>> if (MPI_COMM_NULL != new_comm) {
>> <your collective>
>> }
>> would be enough.
>>
>> But maybe I'm wrong: I'll let the specialists answer.
>>
>> Regards,
>> Nadia
>>
>> --
>> Nadia Derbey
>>
>>
>> users-bounces_at_[hidden] wrote on 03/06/2012 02:32:03 PM:
>>
>> > From: Timothy Stitt <Timothy.Stitt.9_at_[hidden]>
>> > To: Open MPI Users <users_at_[hidden]>
>> > Date: 03/06/2012 02:32 PM
>> > Subject: Re: [OMPI users] Scatter+Group Communicator Issue
>> > Sent by: users-bounces_at_[hidden]
>> >
>> > Hi Nadia,
>> >
>> > Thanks for the reply. This is where my confusion with the scatter
>> > command comes in. I was really hoping that MPI_Scatter would
>> > automagically ignore the ranks that are not part of the group
>> > communicator, since this test code is part of something more
>> > complicated where many sub-communicators are created over various
>> > combinations of ranks and used in various collective routines. Do I
>> > really have to filter out the non-communicator ranks manually before
>> > I call the scatter? It would be really nice if the scatter command
>> > were 'smart' enough to do this for me by looking at the communicator
>> > that is passed to the routine.
>> >
>> > Thanks again,
>> >
>> > Tim.
>> >
>> > On Mar 6, 2012, at 8:28 AM, <nadia.derbey_at_[hidden]> wrote:
>> >
>> > Isn't it because you're calling MPI_Scatter() even on rank 2 which
>> > is not part of your new_comm?
>> >
>> > Regards,
>> > Nadia
>> >
>> > users-bounces_at_[hidden] wrote on 03/06/2012 01:52:06 PM:
>> >
>> > > From: Timothy Stitt <Timothy.Stitt.9_at_[hidden]>
>> > > To: "users_at_[hidden]" <users_at_[hidden]>
>> > > Date: 03/06/2012 01:52 PM
>> > > Subject: [OMPI users] Scatter+Group Communicator Issue
>> > > Sent by: users-bounces_at_[hidden]
>> > >
>> > > Hi all,
>> > >
>> > > I am scratching my head over what I think should be a relatively
>> > > simple group communicator operation. I am hoping some kind person
>> > > can put me out of my misery and figure out what I'm doing wrong.
>> > >
>> > > Basically, I am trying to scatter a set of values to a subset of
>> > > process ranks (hence the need for a group communicator). When I run
>> > > the sample code over 4 processes (and scattering to 3 processes), I
>> > > am getting a group-communicator related error in the scatter operation:
>> > >
>> > > > [stats.crc.nd.edu:29285] *** An error occurred in MPI_Scatter
>> > > > [stats.crc.nd.edu:29285] *** on communicator MPI_COMM_WORLD
>> > > > [stats.crc.nd.edu:29285] *** MPI_ERR_COMM: invalid communicator
>> > > > [stats.crc.nd.edu:29285] *** MPI_ERRORS_ARE_FATAL (your MPI job
>> > > will now abort)
>> > > > Complete - Rank 1
>> > > > Complete - Rank 0
>> > > > Complete - Rank 3
>> > >
>> > > The actual test code is below:
>> > >
>> > > program scatter_bug
>> > >
>> > > use mpi
>> > >
>> > > implicit none
>> > >
>> > > integer :: ierr,my_rank,procValues(3),procRanks(3)
>> > > integer :: in_cnt,orig_group,new_group,new_comm,out
>> > >
>> > > call MPI_INIT(ierr)
>> > > call MPI_COMM_RANK(MPI_COMM_WORLD,my_rank,ierr)
>> > >
>> > > procRanks=(/0,1,3/)
>> > > procValues=(/0,434,268/)
>> > > in_cnt=3
>> > >
>> > > ! Create sub-communicator
>> > > call MPI_COMM_GROUP(MPI_COMM_WORLD, orig_group, ierr)
>> > > call MPI_Group_incl(orig_group, in_cnt, procRanks, new_group, ierr)
>> > > call MPI_COMM_CREATE(MPI_COMM_WORLD, new_group, new_comm, ierr)
>> > >
>> > > call MPI_SCATTER(procValues, 1, MPI_INTEGER, out, 1, MPI_INTEGER,
>> > > 0, new_comm, ierr);
>> > >
>> > > print *,"Complete - Rank", my_rank
>> > >
>> > > end program scatter_bug
>> > >
>> > > Thanks in advance for any advice you can give.
>> > >
>> > > Regards.
>> > >
>> > > Tim.
>> > > _______________________________________________
>> > > users mailing list
>> > > users_at_[hidden]
>> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
>> >
>> > Tim Stitt PhD (User Support Manager).
>> > Center for Research Computing | University of Notre Dame |
>> > P.O. Box 539, Notre Dame, IN 46556 | Phone: 574-631-5287 | Email:
>> > tstitt_at_[hidden]
>
>