
Open MPI User's Mailing List Archives


From: Bert Wesarg (wesarg_at_[hidden])
Date: 2006-07-18 14:00:30


hi,

s anwar wrote:
> Thank you for the clarification. Why is MPI_COMM_SELF not the correct
> communicator for MPI_Comm_spawn()? My application will have a single
> master only.
yes, for a single master this should be the same, but i have never tried it.
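
for what it's worth, a minimal, untested sketch of the single-master
case (the "worker" binary name and the process count are made up):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);

    /* with exactly one master, MPI_COMM_SELF as the spawning
     * communicator should behave like a one-process MPI_COMM_WORLD */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4 /* assumed count */,
                   MPI_INFO_NULL, 0 /* root rank */, MPI_COMM_SELF,
                   &intercomm, MPI_ERRCODES_IGNORE);

    /* ... talk to the spawned children over intercomm ... */

    MPI_Finalize();
    return 0;
}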
>
> Also, can I merge the intercommunicator into an intracommunicator via
> MPI_Intercomm_merge(intercomm, 0, &intracomm) and use MPI_Bcast(..., 0,
> intracomm) instead of passing MPI_ROOT or 0 (root's rank in the local group)?
ditto, i have never tried this, but it should work.
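
an untested sketch, reusing the intercomm from the spawn above; the
spawned children would make the same calls but with high = 1:

    MPI_Comm intracomm;
    int one_int = 42;    /* assumed payload */

    /* high = 0 orders the master group first in the merged
     * intracommunicator, so the master ends up as rank 0 */
    MPI_Intercomm_merge(intercomm, 0, &intracomm);

    /* an ordinary intracommunicator bcast, rooted at the master */
    MPI_Bcast(&one_int, 1, MPI_INT, 0, intracomm);

    MPI_Comm_free(&intracomm);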

bye
bert wesarg
>
> Thanks.
>
> On 7/18/06, *Bert Wesarg* <wesarg_at_[hidden]> wrote:
>
> hi,
>
> yes, sorry for my first reply, my words were too rough.
>
> a bcast over an intercomm works this way (in your words):
>
> - your masters want to send a buffer to your slaves
> - one of the masters must pass MPI_ROOT as the root argument in the
> MPI_Bcast call
> - all slaves must pass the rank of that root (within the masters'
> group) as the root argument
> - all other ranks among the masters must pass MPI_PROC_NULL as root
>
> so the buffer will be sent from the root to all slaves
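>
> to make that concrete, an untested sketch; here intercomm is what
> MPI_Comm_spawn returned on the master side and what
> MPI_Comm_get_parent() returns on the slave side:
>
>     int one_int = 42;    /* assumed payload */
>
>     /* master side: the sending master declares itself the root */
>     MPI_Bcast(&one_int, 1, MPI_INT, MPI_ROOT, intercomm);
>     /* any other master would pass MPI_PROC_NULL instead */
>
>     /* slave side: root is the sender's rank in the masters' group */
>     MPI_Bcast(&one_int, 1, MPI_INT, 0, intercomm);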
>
> if this isn't clear yet, have a look at the concept of MPI
> intercommunicators. a good starting point is the standard:
> http://www.mpi-forum.org/docs/mpi-11-html/node111.html
>
> by the way:
> * your one_int is never initialized
> * i don't know if MPI_COMM_SELF is the right comm for
> MPI_Comm_spawn
>
> bye
> bert wesarg
>
> s anwar wrote:
> > I don't think I blamed the implementation in any way in my original
> > email. My intent is to gain understanding of why my code does/should
> > not work. I assumed that I was not passing the correct
> > intercommunicator. However, I am at a loss on how to construct a
> > proper intercommunicator in this case. You have mentioned that I
> > haven't defined any group to be the root group. Would you care to
> > elaborate on how I can make a group a root group?
> >
> > Thanks.
> >
> > On 7/18/06, *Bert Wesarg* <wesarg_at_[hidden]> wrote:
> >
> > Hi,
> >
> > s anwar wrote:
> > > Please see attached source file.
> > >
> > > According to my understanding of MPI_Comm_spawn(), the
> > > intercommunicator returned is the same as the one returned by
> > > MPI_Comm_get_parent() in the spawned processes. I am assuming that
> > > there is one intercommunicator which contains all the (spawned)
> > > child processes as well as the parent process. If this is the case,
> > > then why does an MPI_Bcast() using such an intercommunicator wait
> > > indefinitely?
> >
> > your code from line 75:
> >
> > MPI_Bcast(&one_int, 1, MPI_INT, 0, intercomm);
> >
> > from the MPI-2 standard, section 7.3.2.1 "Broadcast":
> >
> > "If comm is an intercommunicator, then the call involves all
> > processes in
> > the intercommunicator, but with one group (group A) defining
> the root
> > process. All processes in the other group (group B) pass the same
> > value in
> > argument root, which is the rank of the root in group A. The
> root passes
> > the value MPI_ROOT in root. All other processes in group A
> pass the
> > value
> > MPI_PROC_NULL in root. Data is broadcast from the root to all
> > processes in
> > group B. The receive buffer arguments of the processes in group B
> > must be
> > consistent with the send buffer argument of the root."
> >
> > so, you define no group to be the root group (group A). i don't
> > know what should happen when no root group is defined, but first of
> > all your code doesn't conform to the standard, so don't blame the
> > implementation first.
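> >
> > concretely, an untested sketch of the fix to your line 75: the
> > parent, as the only process in the root group, would call
> >
> >     MPI_Bcast(&one_int, 1, MPI_INT, MPI_ROOT, intercomm);
> >
> > while the spawned children pass root = 0, the parent's rank in its
> > own group, on the intercomm from MPI_Comm_get_parent().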
> >
> > bye
> > bert wesarg
> >
_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users