You said "separate MPI applications doing 1-to-N broadcasts over PVM".
You do not mean you are using pvm_bcast though - right?
If these N MPI applications are so independent that you could run one at a
time, or run them on N different clusters, and still get the result you want
(though not the time to solution), then I cannot imagine how there could be
cross talk between them.
I have been assuming that when you describe this as an NxN problem, you
mean there is some desired interaction among the N MPI worlds.
If I have misunderstood, and the N MPI worlds started with N mpirun
operations under PVM are each semantically independent of the other (N-1),
then I am totally at a loss for an explanation.
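For reference, a minimal sketch (added here for illustration, not part of the original thread) of why separately launched jobs should be isolated: MPI_Bcast is defined only over a communicator, and each mpirun launch gives its processes their own MPI_COMM_WORLD.

```c
/* bcast_scope.c - illustrative sketch.
 * Each `mpirun -np N ./bcast_scope` launch gets its own MPI_COMM_WORLD;
 * the broadcast below can only reach the ranks of that one launch. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        value = 42;                     /* root's payload */

    /* Scoped to this job's MPI_COMM_WORLD: ranks started by a
     * different mpirun are not members of this communicator and so,
     * at the MPI level, cannot receive or disturb this broadcast. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d of %d got %d\n", rank, size, value);
    MPI_Finalize();
    return 0;
}
```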
Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
users-bounces_at_[hidden] wrote on 08/11/2010 08:59:16 PM:
> Subject: Re: [OMPI users] MPI_Bcast issue
> From: Randolph Pullen
> To: Open MPI Users
> Date: 08/11/2010 09:01 PM
> Please respond to Open MPI Users
> I (a single user) am running N separate MPI applications doing 1-to-N
> broadcasts over PVM; each MPI application is started on each
> machine simultaneously by PVM - the reasons are back in the post.
> The problem is that they somehow collide - yes, I know this should
> not happen; the question is why.
> --- On Wed, 11/8/10, Richard Treumann <treumann_at_[hidden]> wrote:
> From: Richard Treumann <treumann_at_[hidden]>
> Subject: Re: [OMPI users] MPI_Bcast issue
> To: "Open MPI Users" <users_at_[hidden]>
> Received: Wednesday, 11 August, 2010, 11:34 PM
> I am confused about using multiple, concurrent mpirun operations.
> If there are M uses of mpirun and each starts N tasks (carried out
> under PVM or any other way), I would expect you to have M completely
> independent MPI jobs with N tasks (processes) each. You could have
> some root in each of the M MPI jobs do an MPI_Bcast to the other
> (N-1) in that job, but there is no way in MPI (without using
> accept/connect) to get tasks of job 0 to give data to tasks of the
> other jobs.
> With M uses of mpirun, you have M worlds that are forever isolated
> from the other (M-1) worlds (again, unless you do accept/connect).
> In what sense are you treating this as a single MxN application?
> (I use M & N to keep them distinct. I assume if M == N, we have your
> case.)
> Dick Treumann - MPI Team
> IBM Systems & Technology Group
> Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
> Tele (845) 433-7846 Fax (845) 433-8363
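For completeness, the accept/connect escape hatch mentioned above looks roughly like this (a hedged sketch, not from the thread; the port string would normally travel out-of-band, e.g. via MPI_Publish_name or a shared file):

```c
/* bridge.c - sketch of MPI_Comm_accept / MPI_Comm_connect.
 * Run one job as "server" and pass its printed port name to a second
 * job run as the client; `inter` then bridges the two MPI worlds. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;   /* intercommunicator spanning the two jobs */

    MPI_Init(&argc, &argv);

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port: %s\n", port);   /* hand this to the other job */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Close_port(port);
    } else if (argc > 1) {
        /* client: the server's port name is taken from the command line */
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
    } else {
        MPI_Finalize();
        return 1;     /* need "server" or a port name argument */
    }

    /* Data can now cross job boundaries, e.g. MPI_Send/MPI_Recv
     * or an intercommunicator collective over `inter`. */
    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}
```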
> users mailing list