Brian,

I'm using Open MPI 1.2.6 (r17946). Could you please check which version works? Thanks a lot,
Mi


          "Brian W. Barrett" <brbarret@open-mpi.org>
          Sent by: users-bounces@open-mpi.org

          08/25/2008 01:44 PM
          Please respond to
          Open MPI Users <users@open-mpi.org>


To

Open MPI Users <users@open-mpi.org>

cc

Greg Rodgers/Poughkeepsie/IBM@IBMUS, Brad Benton/Austin/IBM@IBMUS

Subject

Re: [OMPI users] RDMA over IB between heterogenous processors with different endianness

On Mon, 25 Aug 2008, Mi Yan wrote:

> Does Open MPI always use the SEND/RECV protocol between heterogeneous
> processors with different endianness?
>
> I tried setting btl_openib_flags to 2, 4 and 6 respectively to allow RDMA,
> but the bandwidth between the two heterogeneous nodes is low, the same as
> the bandwidth when btl_openib_flags is 1. It seems to me SEND/RECV is
> always used no matter what btl_openib_flags is set to. Can I force Open MPI
> to use RDMA between x86 and PPC? I only transfer MPI_BYTE, so we do not
> need endianness support.
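
For reference, settings like those described above would normally be passed
as MCA parameters on the mpirun command line; the line below is only a
sketch, with placeholder host names and executable, and with
btl_openib_flags set to one of the values (1, 2, 4 or 6) tried above:

mpirun --mca btl openib,self --mca btl_openib_flags 6 \
       -np 2 --host x86node,ppcnode ./bandwidth_test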

Which version of Open MPI are you using?  In recent versions (I don't
remember exactly when the change occurred, unfortunately), the decision
between send/recv and RDMA was moved from being based solely on the
architecture of the remote process to being based on both the architecture
and the datatype.  It's possible this has been broken again, but there
definitely was some window (possibly only on the development trunk) when
that worked correctly.
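
For what it's worth, the kind of measurement in question is a plain MPI_BYTE
ping-pong between one rank on each node; the sketch below is not from this
thread and only illustrates the shape of such a test (buffer size and
iteration count are arbitrary):

/* Sketch of an MPI_BYTE ping-pong bandwidth test between two ranks.
 * Buffer size and iteration count are arbitrary illustrative choices. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int msg_size = 1 << 20;   /* 1 MiB per message */
    const int iters    = 100;
    int rank, i;
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = (char *) malloc(msg_size);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            /* Rank 0 sends, then waits for the echo from rank 1. */
            MPI_Send(buf, msg_size, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes each message back. */
            MPI_Recv(buf, msg_size, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* Two transfers of msg_size bytes per iteration. */
        printf("avg bandwidth: %.2f MB/s\n",
               2.0 * iters * msg_size / (t1 - t0) / 1.0e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}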

Brian
_______________________________________________
users mailing list
users@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users