
Subject: [OMPI users] Working with a CellBlade cluster
From: Gilbert Grosdidier (grodid_at_[hidden])
Date: 2008-10-19 04:32:13

 Working with a CellBlade cluster (QS22), the requirement is to have one
instance of the executable running on each socket of the blade (there are 2
sockets). The application is of the 'domain decomposition' type, and each
instance frequently needs to send/receive data both with the remote blades and
with the neighboring socket.
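
 As a sketch of the launch setup we have in mind (hostnames here are only
placeholders, not our actual configuration), each blade would appear in the
hostfile with two slots, so that two ranks land on each node; pinning each
rank to its own socket we treat as a separate affinity question:

    # hypothetical hostfile: one QS22 blade per line, 2 slots = 2 sockets
    blade01 slots=2
    blade02 slots=2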

 The question is: which specification must be used for the MCA btl component
to force 1) shared-memory ('sm') messaging when communicating with this
neighboring socket, while 2) using openib to communicate with the remote blades?
Is '-mca btl sm,openib,self' suitable for this?
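
 For instance, would a launch along these lines do what we want (the process
count, hostfile name and application name are placeholders, and exact option
spellings may differ between Open MPI releases)?

    mpirun -np 16 -hostfile myhosts \
           -mca btl sm,openib,self ./my_app

 The idea is that all three BTLs are offered and Open MPI then chooses one per
peer, with 'self' covering send-to-self traffic.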

 Also, which debug flags could be used to cross-check that the messages are
_actually_ going through the intended channel in each case, please?
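
 Would something along these lines be the right direction (the parameter names
are only our guess at what is relevant; we would use ompi_info to confirm they
exist in our release)?

    # ask the BTL framework to report which components it opens and selects
    mpirun -np 16 -hostfile myhosts \
           -mca btl sm,openib,self \
           -mca btl_base_verbose 30 ./my_app

    # list the BTL parameters available in this installation
    ompi_info --param btl all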

 We are currently using the Open MPI 1.2.5 shipped with RHEL 5.2 (ppc64).
Which version do you think is currently the most optimised for these
processors and this type of problem? Should we move to Open MPI 1.2.8 instead,
or even try an Open MPI 1.3 nightly build?

 Thanks in advance for your help, Gilbert.