Dear Gus,

That was a transcription error on my part when copying the code into the email. The MPI_Finalize call is in the actual code I used.

Thanks,

Tim.

On Mar 6, 2012, at 11:43 AM, Gustavo Correa wrote:

Hi Timothy

There is no call to MPI_Finalize in the program.
Would this be the problem?

I hope this helps,
Gus Correa


On Mar 6, 2012, at 10:19 AM, Timothy Stitt wrote:

Will definitely try that. Thanks for the suggestion.

Basically, I need to be able to scatter values from a sender to a subset of ranks (as I scale my production code, I don't want to use MPI_COMM_WORLD, as the receiver list will be quite small) without the receivers knowing if they are to receive something in advance of the scatter.
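Roughly, the pattern I have in mind looks like this (a sketch only, with illustrative names, not my production code): each rank learns whether it participates purely from the communicator handle it gets back, so the receivers need no advance notice.

```fortran
! Sketch: build a sub-communicator over the receiver ranks; on every
! other rank MPI_Comm_create returns MPI_COMM_NULL, which the rank can
! test locally to decide whether to join the collective.
call MPI_Comm_group(MPI_COMM_WORLD, world_group, ierr)
call MPI_Group_incl(world_group, n_receivers, receiver_ranks, sub_group, ierr)
call MPI_Comm_create(MPI_COMM_WORLD, sub_group, sub_comm, ierr)

if (sub_comm /= MPI_COMM_NULL) then
   ! Only members of sub_group reach this point with a valid handle.
   call MPI_Scatter(sendbuf, 1, MPI_INTEGER, recvval, 1, MPI_INTEGER, &
                    0, sub_comm, ierr)
end if
```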

Thanks again for any help,

Tim.

On Mar 6, 2012, at 10:17 AM, <nadia.derbey@bull.net> wrote:

Tim,

Since MPI_Comm_create sets the created communicator to MPI_COMM_NULL for the processes that are not in the group, perhaps preceding your collectives with a:
if (MPI_COMM_NULL != new_comm) {
  <your collective>
}
could be enough.
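Applied to the Fortran test program quoted below, that guard might look something like this (a sketch; I have not run it):

```fortran
call MPI_COMM_CREATE(MPI_COMM_WORLD, new_group, new_comm, ierr)

! new_comm is MPI_COMM_NULL on any rank excluded from new_group
! (rank 2 in the 4-process run), so only members enter the collective.
if (new_comm /= MPI_COMM_NULL) then
   call MPI_SCATTER(procValues, 1, MPI_INTEGER, out, 1, MPI_INTEGER, &
                    0, new_comm, ierr)
end if
```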

But maybe I'm wrong; I'll let the specialists answer.

Regards,
Nadia

--
Nadia Derbey


users-bounces@open-mpi.org wrote on 03/06/2012 02:32:03 PM:

From: Timothy Stitt <Timothy.Stitt.9@nd.edu>
To: Open MPI Users <users@open-mpi.org>
Date: 03/06/2012 02:32 PM
Subject: Re: [OMPI users] Scatter+Group Communicator Issue
Sent by: users-bounces@open-mpi.org

Hi Nadia,

Thanks for the reply. This is where my confusion with the scatter
command comes in. I was really hoping that MPI_Scatter would
automagically ignore the ranks that are not part of the group
communicator, since this test code is part of something more
complicated where many sub-communicators are created over various
combinations of ranks and used in various collective routines. Do I
really have to filter out the non-communicator ranks manually before
I call the scatter? It would be really nice if the scatter command
were 'smart' enough to do this for me by looking at the communicator
that is passed to the routine.

Thanks again,

Tim.

On Mar 6, 2012, at 8:28 AM, <nadia.derbey@bull.net> wrote:

Isn't it because you're calling MPI_Scatter() even on rank 2 which
is not part of your new_comm?

Regards,
Nadia

users-bounces@open-mpi.org wrote on 03/06/2012 01:52:06 PM:

From: Timothy Stitt <Timothy.Stitt.9@nd.edu>
To: "users@open-mpi.org" <users@open-mpi.org>
Date: 03/06/2012 01:52 PM
Subject: [OMPI users] Scatter+Group Communicator Issue
Sent by: users-bounces@open-mpi.org

Hi all,

I am scratching my head over what I think should be a relatively
simple group communicator operation. I am hoping some kind person
can put me out of my misery and figure out what I'm doing wrong.

Basically, I am trying to scatter a set of values to a subset of
process ranks (hence the need for a group communicator). When I run
the sample code over 4 processes (and scattering to 3 processes), I
am getting a group-communicator related error in the scatter operation:

[stats.crc.nd.edu:29285] *** An error occurred in MPI_Scatter
[stats.crc.nd.edu:29285] *** on communicator MPI_COMM_WORLD
[stats.crc.nd.edu:29285] *** MPI_ERR_COMM: invalid communicator
[stats.crc.nd.edu:29285] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
Complete - Rank           1
Complete - Rank           0
Complete - Rank           3

The actual test code is below:

program scatter_bug

 use mpi

 implicit none

 integer :: ierr,my_rank,procValues(3),procRanks(3)
 integer :: in_cnt,orig_group,new_group,new_comm,out

 call MPI_INIT(ierr)
 call MPI_COMM_RANK(MPI_COMM_WORLD,my_rank,ierr)

 procRanks=(/0,1,3/)
 procValues=(/0,434,268/)
 in_cnt=3

 ! Create sub-communicator
 call MPI_COMM_GROUP(MPI_COMM_WORLD, orig_group, ierr)
 call MPI_Group_incl(orig_group, in_cnt, procRanks, new_group, ierr)
 call MPI_COMM_CREATE(MPI_COMM_WORLD, new_group, new_comm, ierr)

 call MPI_SCATTER(procValues, 1, MPI_INTEGER, out, 1, MPI_INTEGER, &
                  0, new_comm, ierr)

 print *,"Complete - Rank", my_rank

end program scatter_bug

Thanks in advance for any advice you can give.

Regards.

Tim.
_______________________________________________
users mailing list
users@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

Tim Stitt PhD (User Support Manager).
Center for Research Computing | University of Notre Dame |
P.O. Box 539, Notre Dame, IN 46556 | Phone: 574-631-5287 | Email: tstitt@nd.edu