Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] RDMA over IB between heterogeneous processors with different endianness
From: Mi Yan (miyan_at_[hidden])
Date: 2008-08-25 13:57:30


Brian,

      I'm using Open MPI 1.2.6 (r17946). Could you please check which
version works? Thanks a lot,
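
For reference, the kind of MPI_BYTE bandwidth test I am running looks
roughly like the sketch below; the message size, iteration count, and the
host names in the mpirun command are only illustrative, not the exact
values from my runs.

/*
 * Minimal sketch of a two-rank MPI_BYTE bandwidth test.
 * Illustrative run, forcing RDMA put/get in the openib BTL
 * (hypothetical host names):
 *   mpirun -np 2 -host x86node,ppcnode --mca btl_openib_flags 6 ./bw_test
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE (1 << 20)   /* 1 MiB per message          */
#define ITERS    100         /* number of timed iterations */

int main(int argc, char **argv)
{
    int rank, i;
    double t0, t1;
    char *buf = malloc(MSG_SIZE);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();

    /* Rank 0 streams MSG_SIZE-byte messages to rank 1. */
    for (i = 0; i < ITERS; i++) {
        if (rank == 0)
            MPI_Send(buf, MSG_SIZE, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, MSG_SIZE, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    t1 = MPI_Wtime();
    if (rank == 0)
        printf("approx. uni-directional bandwidth: %.2f MB/s\n",
               (double)MSG_SIZE * ITERS / (t1 - t0) / 1.0e6);

    free(buf);
    MPI_Finalize();
    return 0;
}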
Mi

                                                                           
             "Brian W.
             Barrett"
             <brbarret_at_open-mp To
             i.org> Open MPI Users <users_at_[hidden]>
             Sent by: cc
             users-bounces_at_ope Greg
             n-mpi.org Rodgers/Poughkeepsie/IBM_at_IBMUS,
                                       Brad Benton/Austin/IBM_at_IBMUS
                                                                   Subject
             08/25/2008 01:44 Re: [OMPI users] RDMA over IB
             PM between heterogenous processors
                                       with different endianness
                                                                           
             Please respond to
              Open MPI Users
             <users_at_open-mpi.o
                    rg>
                                                                           
                                                                           

On Mon, 25 Aug 2008, Mi Yan wrote:

> Does Open MPI always use the SEND/RECV protocol between heterogeneous
> processors with different endianness?
>
> I tried setting btl_openib_flags to 2, 4, and 6 respectively to allow RDMA,
> but the bandwidth between the two heterogeneous nodes is low, the same as
> the bandwidth when btl_openib_flags is 1. It seems to me SEND/RECV is
> always used no matter what btl_openib_flags is. Can I force Open MPI to use
> RDMA between x86 and PPC? I only transfer MPI_BYTE, so I do not need
> endianness support.

Which version of Open MPI are you using? In recent versions (I don't
remember exactly when the change occurred, unfortunately), the decision
between send/recv and RDMA was moved from being based solely on the
architecture of the remote process to being based on both the architecture
and the datatype. It's possible this has been broken again, but there
definitely was some window (possibly only on the development trunk) when
that worked correctly.

Brian
_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users



