
Open MPI User's Mailing List Archives


From: Paul (paul.lundin_at_[hidden])
Date: 2006-05-25 19:37:22


Already done. I grabbed rc5 this morning and rebuilt everything, but I am
still having the same issue. I sent a message to the openib list about it; I
won't cross-post that message to this list. Do you have access to that list?
I can send you a copy if you need it. The summary is that there appear to be
several distinct issues. I have made a little headway in narrowing down what
they are, though no guarantees that my guesses are right.

It's not a problem. At the moment I have the resources to chase it. Just let
me know what needs to be done.
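
For reference, here is roughly how I am sanity-checking the rebuilt binary
(paths match the make_$arch file quoted below; the exact wording of the
`file` output will vary by distribution):

    # Confirm the binary really came out 64-bit; on a PPC64 box this
    # should report something like "ELF 64-bit MSB executable ... PowerPC".
    file ./IMB-MPI1

    # Confirm it resolves against the libraries under /opt/ompi rather
    # than a stray 32-bit installation.
    ldd ./IMB-MPI1 | grep -i mpi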

On 5/25/06, Jeff Squyres (jsquyres) <jsquyres_at_[hidden]> wrote:
>
> In further discussions with other OMPI team members, I double checked
> (should have checked this in the beginning, sorry): OFED 1.0rc4 does
> not support 64 bit on PPC64 platforms; it only supports 32 bit on PPC64
> platforms.
>
> Mellanox says that 1.0rc5 (cut this morning) supports 64 bit on PPC64
> platforms.
>
> Can you try upgrading? Sorry for all the hassle. :-(
>
>
> ------------------------------
> *From:* users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] *On
> Behalf Of *Paul
> *Sent:* Thursday, May 25, 2006 11:51 AM
>
> *To:* Open MPI Users
> *Subject:* Re: [OMPI users] pallas assistance ?
>
> Okay, I rebuilt using those diffs. I am still having issues with pallas,
> however. That said, I think my problem is more with compiling/linking
> pallas itself. Here is my pallas make_$arch file:
>
> MPI_HOME = /opt/ompi/
> MPI_INCLUDE = $(MPI_HOME)/include
> LIB_PATH =
> LIBS =
> CC = ${MPI_HOME}/bin/mpicc
> OPTFLAGS = -O
> CLINKER = ${CC}
> LDFLAGS = -m64
> CPPFLAGS = -m64
>
> Again, running ldd on the IMB-MPI1 binary works fine, and the compilation
> completes okay.
>
> On 5/25/06, Jeff Squyres (jsquyres) <jsquyres_at_[hidden]> wrote:
> >
> > Gleb just committed some fixes for the PPC64 issue last night (https://svn.open-mpi.org/trac/ompi/changeset/10059
> > ). It should only affect the eager RDMA issues, but it would be a
> > worthwhile datapoint if you could test with (i.e., specify no MCA
> > parameters on your mpirun command line, so it should use RDMA by default).
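> >
> > Concretely, the "no MCA parameters" run would just be something like this
> > (hostfile and binary path as in your earlier messages), letting the
> > openib BTL fall back to its eager RDMA default:
> >
> > /opt/ompi/bin/mpirun -np 2 -hostfile machine.list ./IMB-MPI1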
> >
> > I'm waiting for my own PPC64 machine to be reconfigured so that I can
> > test again; can you try with r10059 or later?
> >
> > ------------------------------
> > *From:* users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]]
> > *On Behalf Of *Paul
> > *Sent:* Wednesday, May 24, 2006 9:35 PM
> > *To:* Open MPI Users
> > *Subject:* Re: [OMPI users] pallas assistance ?
> >
> > It makes no difference on my end. Exact same error.
> >
> > On 5/24/06, Andrew Friedley <afriedle_at_[hidden]> wrote:
> > >
> > > Paul wrote:
> > > > Somebody call Orkin. ;-P
> > > > Well, I tried running it with things set as noted in the bug report.
> > > > However, it doesn't change anything on my end. I am willing to do any
> > > > verification you guys need (time permitting and all). Anything special
> > > > needed to get mpi_latency to compile? I can run that to verify that
> > > > things are actually working on my end.
> > > >
> > > > [root_at_something ompi]#
> > > Shouldn't the parameter be '--mca btl_openib_use_eager_rdma'?
> > >
> > > > [root_at_something ompi]# /opt/ompi/bin/mpirun --mca btl_openmpi_use_srq 1
> > > > -np 2 -hostfile machine.list ./IMB-MPI1
> > >
> > > Same here - '--mca btl_openib_use_srq'
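> > >
> > > In full, the corrected invocations would be something like this (same
> > > -np and -hostfile as in the commands above):
> > >
> > > /opt/ompi/bin/mpirun --mca btl_openib_use_eager_rdma 1 \
> > >     -np 2 -hostfile machine.list ./IMB-MPI1
> > > /opt/ompi/bin/mpirun --mca btl_openib_use_srq 1 \
> > >     -np 2 -hostfile machine.list ./IMB-MPI1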
> > >
> > > Andrew
> > > _______________________________________________
> > > users mailing list
> > > users_at_[hidden]
> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
> > >
> >
>