Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Working with a CellBlade cluster
From: Lenny Verkhovsky (lenny.verkhovsky_at_[hidden])
Date: 2008-10-23 13:52:21


According to https://svn.open-mpi.org/trac/ompi/milestone/Open%20MPI%201.3, very soon.
In the meantime you can download the trunk version (http://www.open-mpi.org/svn/)
and check whether it works for you.
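
For example, a trunk checkout and build would look roughly like this (the
repository path and install prefix here are only an illustration; see the page
above for the current instructions):

svn checkout http://svn.open-mpi.org/svn/ompi/trunk ompi-trunk
cd ompi-trunk
./autogen.sh          # developer (SVN) checkouts need autogen before configure
./configure --prefix=$HOME/ompi-trunk
make all install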

How can you check the CPU-to-socket mapping the OS uses? My cat /proc/cpuinfo
shows very little info:
# cat /proc/cpuinfo
processor : 0
cpu : Cell Broadband Engine, altivec supported
clock : 3200.000000MHz
revision : 48.0 (pvr 0070 3000)
processor : 1
cpu : Cell Broadband Engine, altivec supported
clock : 3200.000000MHz
revision : 48.0 (pvr 0070 3000)
processor : 2
cpu : Cell Broadband Engine, altivec supported
clock : 3200.000000MHz
revision : 48.0 (pvr 0070 3000)
processor : 3
cpu : Cell Broadband Engine, altivec supported
clock : 3200.000000MHz
revision : 48.0 (pvr 0070 3000)
timebase : 26666666
platform : Cell
machine : CHRP IBM,0793-1RZ
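
For comparison, on kernels that export topology information, a quick sketch
like this shows which physical package (socket) each logical CPU belongs to
(the sysfs topology files may be absent on some Cell kernels):

for c in /sys/devices/system/cpu/cpu[0-9]*; do
    echo "$c: package $(cat $c/topology/physical_package_id 2>/dev/null)"
done

numactl --hardware, if installed, prints a similar per-node CPU listing.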

On Thu, Oct 23, 2008 at 3:00 PM, Mi Yan <miyan_at_[hidden]> wrote:

> Hi, Lenny,
>
> So the rank file map will be supported in OpenMPI 1.3? I'm using OpenMPI 1.2.6
> and did not find the parameter "rmaps_rank_file_".
> Do you have an idea when OpenMPI 1.3 will be available? OpenMPI 1.3 has quite
> a few features I'm looking for.
>
> Thanks,
> Mi
> From: "Lenny Verkhovsky" <lenny.verkhovsky_at_[hidden]>
> Sent by: users-bounces_at_[hidden]
> Date: 10/23/2008 05:48 AM
> Reply to: Open MPI Users <users_at_[hidden]>
> To: "Open MPI Users" <users_at_[hidden]>
> Subject: Re: [OMPI users] Working with a CellBlade cluster
>
> Hi,
>
>
> If I understand you correctly, the most suitable way to do it is with the
> processor affinity (paffinity) support that we have in Open MPI 1.3 and the trunk.
> However, the OS usually distributes processes evenly between sockets by itself.
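>
> For instance (just a sketch, with ./app standing in for the real executable),
> the simplest form of affinity is
>
> mpirun --mca mpi_paffinity_alone 1 -np 4 ./app
>
> which binds each rank to a processor in order; a rankfile (sketched below)
> gives per-socket control.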
>
> There is still no formal FAQ, for a number of reasons, but you can read how to
> use it in the attached draft (there were a few renamings of the params, so
> check with ompi_info).
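>
> Just as an illustration (the hostnames blade1/blade2 and file names are made
> up, and the exact option names should be verified with ompi_info on your
> build), a rankfile pinning one rank to each socket of two blades could look
> like this, using socket:core notation:
>
> rank 0=blade1 slot=0:0
> rank 1=blade1 slot=1:0
> rank 2=blade2 slot=0:0
> rank 3=blade2 slot=1:0
>
> and would be passed with something like
>
> mpirun -np 4 -rf my_rankfile ./app
>
> while ompi_info --param rmaps rank_file lists the related parameters.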
>
> Shared memory (sm) is used between processes that share the same machine, and
> openib is used between different machines (hostnames); no special MCA params
> are needed.
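>
> For example (the hostfile and application names are made up, and verbosity
> levels may vary between versions), a command like
>
> mpirun -np 4 -hostfile blades \
>     --mca btl sm,openib,self \
>     --mca btl_base_verbose 30 \
>     ./app
>
> keeps sm for on-node traffic and openib for inter-node traffic, and the
> btl_base_verbose output should show which BTL each peer connection ends up
> using.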
>
> Best regards,
> Lenny
>
>
>
> On Sun, Oct 19, 2008 at 10:32 AM, Gilbert Grosdidier <grodid_at_[hidden]> wrote:
>
> Working with a CellBlade cluster (QS22), the requirement is to have one
> instance of the executable running on each socket of the blade (there are
> 2 sockets). The application is of the 'domain decomposition' type, and each
> instance is required to often send/receive data with both the remote blades
> and the neighbor socket.
>
> The question is: which specification must be used for the mca btl component
> to force 1) shmem-type messages when communicating with this neighbor
> socket, while 2) using openib to communicate with the remote blades?
> Is '-mca btl sm,openib,self' suitable for this?
>
> Also, which debug flags could be used to crosscheck that the messages are
> _actually_ going through the right channel in each case, please?
>
> We are currently using OpenMPI 1.2.5 shipped with RHEL5.2 (ppc64).
> Which version do you think is currently the most optimised for these
> processors and problem type? Should we go towards OpenMPI 1.2.8 instead?
> Or even try some OpenMPI 1.3 nightly build?
>
> Thanks in advance for your help, Gilbert.
>
> (See attached file: RANKS_FAQ.doc)
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



