Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Working with a CellBlade cluster
From: Mi Yan (miyan_at_[hidden])
Date: 2008-10-23 09:00:33


Hi, Lenny,

        So rank file mapping will be supported in Open MPI 1.3? I'm using
Open MPI 1.2.6 and did not find the parameter "rmaps_rank_file_".
       Do you have an idea when Open MPI 1.3 will be available? Open MPI 1.3
has quite a few features I'm looking for.

Thanks,
Mi

                                                                           
             "Lenny
             Verkhovsky"
             <lenny.verkhovsky To
             @gmail.com> "Open MPI Users"
             Sent by: <users_at_[hidden]>
             users-bounces_at_ope cc
             n-mpi.org
                                                                   Subject
                                       Re: [OMPI users] Working with a
             10/23/2008 05:48 CellBlade cluster
             AM
                                                                           
                                                                           
             Please respond to
              Open MPI Users
             <users_at_open-mpi.o
                    rg>
                                                                           
                                                                           

Hi,

If I understand you correctly, the most suitable way to do it is with the
processor affinity (paffinity) and rank file support that we have in Open
MPI 1.3. However, the OS usually distributes processes evenly between
sockets by itself.
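
For illustration, rank file placement in the 1.3 series looks roughly like
the sketch below (the host name, rankfile name and executable are
placeholders, and option spellings changed between snapshots, so verify
against your build with ompi_info):

    # my_rankfile: pin rank 0 to socket 0 and rank 1 to socket 1 of host qs22-01
    # syntax: rank <N>=<host> slot=<socket>:<core>
    rank 0=qs22-01 slot=0:0
    rank 1=qs22-01 slot=1:0

    # launch two ranks placed according to the rankfile
    mpirun -np 2 -rf my_rankfile ./app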

There is still no formal FAQ, for multiple reasons, but you can read how to
use it in the attached draft (there were a few renamings of the parameters,
so check with ompi_info).
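
For example, to check which rank-file parameters your build actually
exposes (the framework and component names below follow the 1.3 naming and
may differ in your snapshot):

    # list the MCA parameters of the rank_file mapper
    ompi_info --param rmaps rank_file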

Shared memory is used between processes that share the same machine, and
openib is used between different machines (hostnames); no special MCA
parameters are needed.
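
For the case described below, a command line along these lines should be
enough (the hostfile and executable names are placeholders); raising
btl_base_verbose makes the BTL selection visible, which also answers the
crosscheck question:

    # sm between ranks on the same blade, openib across blades, self for loopback
    # btl_base_verbose increases BTL selection/debug output
    mpirun -np 4 -hostfile blades \
           -mca btl sm,openib,self \
           -mca btl_base_verbose 30 \
           ./app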

Best regards,
Lenny

On Sun, Oct 19, 2008 at 10:32 AM, Gilbert Grosdidier <grodid_at_[hidden]>
wrote:
   Working with a CellBlade cluster (QS22), the requirement is to have one
  instance of the executable running on each socket of the blade (there are
  2 sockets). The application is of the 'domain decomposition' type, and
  each instance is required to often send/receive data with both the remote
  blades and the neighbor socket.

   The question is: which specification must be used for the MCA btl
  component to force 1) shmem-type messages when communicating with this
  neighbor socket, while 2) using openib to communicate with the remote
  blades? Is '-mca btl sm,openib,self' suitable for this?

   Also, which debug flags could be used to crosscheck that the messages
  are _actually_ going through the right channel for a given peer, please?

   We are currently using OpenMPI 1.2.5 shipped with RHEL5.2 (ppc64).
  Which version do you think is currently the most optimised for these
  processors and problem type? Should we go towards OpenMPI 1.2.8 instead?
  Or even try some OpenMPI 1.3 nightly build?

   Thanks in advance for your help, Gilbert.

(See attached file: RANKS_FAQ.doc)




