On Sat, Jun 11, 2011 at 5:17 PM, Ole Kliemann <email@example.com> wrote:
On Sat, Jun 11, 2011 at 07:24:24AM -0600, Ralph Castain wrote:
> Oh my - that is such an old version! Any reason for using it instead of something more recent?

I'm using the cluster of the university where I work and I'm not the admin, so I'm going with what is installed there.
This is the first time I'm using MPI. Before I complain to the admins about old versions or anything else, I'd like to check whether my code is actually correct with respect to the MPI specification.
> On Jun 11, 2011, at 8:43 AM, Ole Kliemann wrote:
> > Hi everyone!
> > I'm trying to use MPI on a cluster running OpenMPI 1.2.4 and starting
> > processes through PBSPro_126.96.36.199766. I've been running into a couple
> > of performance and deadlock problems and would like to check whether I'm
> > making a mistake.
> > One of the deadlocks I managed to boil down to the attached example. I
> > run it on 8 cores. It usually deadlocks with all except one process
> > showing
> > start barrier
> > as last output.
> > The one process out of order shows:
> > start getting local
> > My question at this point is simply whether this is expected behaviour
> > of Open MPI.
> > Thanks in advance!
> > Ole
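Since the attached mpi_barrier.cc doesn't come through on the archive, here is a purely hypothetical sketch of the kind of program that produces the output order described above (all the names and the structure are my assumptions, not the actual attachment): each rank uses passive-target one-sided access to read from its own window, then enters a barrier. On Open MPI 1.2.x, passive-target progress generally depended on the target rank making MPI calls, so a rank could hang inside the lock/get/unlock phase while every other rank waits in MPI_Barrier, which matches the "start getting local" / "start barrier" pattern:

```cpp
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each rank exposes one int in an RMA window.
    int local = rank;
    MPI_Win win;
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    // Passive-target read of the rank's own window entry.
    std::cout << "start getting local" << std::endl;
    int value = -1;
    MPI_Win_lock(MPI_LOCK_SHARED, rank, 0, win);
    MPI_Get(&value, 1, MPI_INT, rank, 0, 1, MPI_INT, win);
    MPI_Win_unlock(rank, win);  // with osc progress issues, this may never return

    // If one rank never leaves the unlock above, the barrier deadlocks
    // with every other rank stuck here.
    std::cout << "start barrier" << std::endl;
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Again, this is only a guess at the shape of the reproducer; if the real mpi_barrier.cc differs, the diagnosis may too.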
> > <mpi_barrier.cc>
> > _______________________________________________
> > users mailing list
> > firstname.lastname@example.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users