
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_Bcast issue
From: Randolph Pullen (randolph_pullen_at_[hidden])
Date: 2010-08-11 20:59:16

I (a single user) am running N separate MPI applications, each doing 1-to-N broadcasts. Each MPI application is started on every machine simultaneously by PVM; the reasons are back in the post history.

The problem is that they somehow collide. Yes, I know this should not happen; the question is why.

--- On Wed, 11/8/10, Richard Treumann <treumann_at_[hidden]> wrote:

From: Richard Treumann <treumann_at_[hidden]>
Subject: Re: [OMPI users] MPI_Bcast issue
To: "Open MPI Users" <users_at_[hidden]>
Received: Wednesday, 11 August, 2010, 11:34 PM


I am confused about using multiple, concurrent mpirun operations. If there are M uses of mpirun and each starts N tasks (launched under PVM or any other way), I would expect you to have M completely independent MPI jobs with N tasks (processes) each. You could have some root in each of the M MPI jobs do an MPI_Bcast to the other (N-1) tasks in that job, but there is no way in MPI (without using accept/connect) to get tasks of job 0 to give data to tasks of jobs 1 to (M-1).
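The within-one-job case Dick describes can be sketched as follows. This is a minimal illustrative program, not code from the thread: rank 0 broadcasts a value to the other (N-1) ranks of a single mpirun world, and every rank must make the matching MPI_Bcast call.

```c
/* Minimal sketch: rank 0 broadcasts one int to the other N-1 ranks
 * of ONE MPI job (one mpirun / one MPI_COMM_WORLD). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;                   /* root fills the buffer */

    /* Collective call: every rank in this world participates. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d sees %d\n", rank, value);

    MPI_Finalize();
    return 0;
}
```

Run with, e.g., `mpirun -np 4 ./bcast`; each of the M independent jobs would execute its own broadcast over its own MPI_COMM_WORLD, with no interaction between worlds.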

With M uses of mpirun, you have M worlds that are forever isolated from the other (M-1) worlds (again, unless you do accept/connect).
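The accept/connect escape hatch mentioned above can be sketched like this. This is a hedged outline, not code from the thread: one job opens a port and accepts, the other connects, yielding an intercommunicator that bridges the two otherwise isolated worlds. How the port string reaches the client (file, name server, command line) is left as an assumption here.

```c
/* Sketch of MPI_Comm_accept/MPI_Comm_connect: the only MPI-level way
 * for two separately launched jobs to exchange data.
 * Assumption: the client receives the server's port string in argv[1]. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Comm inter;                     /* intercommunicator between jobs */
    char port[MPI_MAX_PORT_NAME];

    MPI_Init(&argc, &argv);

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port: %s\n", port);     /* hand this string to the client */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
        MPI_Close_port(port);
    } else if (argc > 1) {
        /* client: argv[1] carries the port string (out-of-band exchange) */
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
    } else {
        MPI_Finalize();
        return 1;
    }

    /* 'inter' now spans both jobs; collectives and point-to-point
     * over it cross the world boundary. */
    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}
```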

In what sense are you treating this as a single MxN application? (I use M and N to keep them distinct; I assume if M == N, we have your case.)

Dick Treumann  -  MPI Team          
