Open MPI User's Mailing List Archives

From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2006-05-25 09:51:39

Gleb just committed some fixes for the PPC64 issue last night. They should
only affect the eager RDMA issues, but it would be a worthwhile datapoint
if you could test with them (i.e., specify no MCA parameters on your mpirun
command line, so it should use RDMA by default).
I'm waiting for my own PPC64 machine to be reconfigured so that I can
test again; can you try with r10059 or later?
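For anyone reproducing this, the comparison being asked for might look like the sketch below. The binary path and hostfile name are taken from the commands quoted later in the thread; the `btl_openib_use_eager_rdma` parameter name is an assumption about this Open MPI revision, so check it with `ompi_info` before relying on it.

```shell
# Baseline requested above: no MCA parameters, so the openib BTL
# should use eager RDMA by default.
/opt/ompi/bin/mpirun -np 2 -hostfile machine.list ./IMB-MPI1

# For comparison, eager RDMA disabled explicitly (parameter name is an
# assumption; verify with: ompi_info --param btl openib).
/opt/ompi/bin/mpirun --mca btl_openib_use_eager_rdma 0 \
    -np 2 -hostfile machine.list ./IMB-MPI1
```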


        From: users-bounces_at_[hidden]
[mailto:users-bounces_at_[hidden]] On Behalf Of Paul
        Sent: Wednesday, May 24, 2006 9:35 PM
        To: Open MPI Users
        Subject: Re: [OMPI users] pallas assistance ?
        It makes no difference on my end. Exact same error.
        On 5/24/06, Andrew Friedley <afriedle_at_[hidden]> wrote:

                Paul wrote:
                > Somebody call orkin. ;-P
                > Well I tried running it with things set as noted in the bug report.
                > However it doesn't change anything on my end. I am willing to do any
                > verification you guys need (time permitting and all). Anything special
                > needed to get mpi_latency to compile? I can run that to verify that
                > things are actually working on my end.
                > [root_at_something ompi]#

                Shouldn't the parameter be '--mca btl_openib_use_srq'?

                > [root_at_something ompi]# /opt/ompi/bin/mpirun --mca btl_openmpi_use_srq 1
                > -np 2 -hostfile machine.list ./IMB-MPI1

                Same here - '--mca btl_openib_use_srq'
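The correction above is just a parameter-name typo: the component is the openib BTL, not "openmpi". A sketch of the fixed invocation, reusing the hostfile and binary from the quoted command (exact parameter availability depends on your Open MPI build):

```shell
# Confirm the exact parameter name first; output varies by Open MPI
# version and whether the openib BTL was built in.
ompi_info --param btl openib | grep srq

# Corrected run: "btl_openib_use_srq", not "btl_openmpi_use_srq".
/opt/ompi/bin/mpirun --mca btl_openib_use_srq 1 \
    -np 2 -hostfile machine.list ./IMB-MPI1
```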