Hardware Locality Development Mailing List Archives


Subject: Re: [hwloc-devel] dplace
From: Brice Goglin (Brice.Goglin_at_[hidden])
Date: 2010-04-20 12:17:56


Does dplace do something clever like reading the MPI rank and
communicator size and trying to figure out how to distribute the ranks
among the cores so as to maximize memory bandwidth or cache sharing?

Brice

Michael Raymond wrote:
> As of SGI ProPack 7, dplace uses hwloc internally to specify stride
> patterns. For example:
>
> mpirun -np 8 dplace -c SC a.out
>
> means to pin ranks to every core inside a socket before jumping to the
> next socket and doing the same.
>
> From the man page:
>
>        For striding patterns any subset of the characters (B)lade,
>        (S)ocket, (C)ore, (T)hread may be used and their ordering
>        specifies the nesting of the iteration.  For example "SC"
>        means to iterate all the cores in a socket before moving to
>        the next CPU socket, while "CB" means to pin to the first
>        core of each blade, then the second core of every blade, etc.
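>
> As a rough illustration (assuming a hypothetical machine with 2 sockets
> of 4 cores each, physical PUs 0-3 on socket 0 and 4-7 on socket 1),
> the "-c SC" example above would amount to:
>
> mpirun -np 8 dplace -c 0,1,2,3,4,5,6,7 a.out
> # ranks 0-3 fill socket 0's cores, ranks 4-7 then fill socket 1's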
>
> I've been trying to evangelize more hwloc usage with mixed results.
>
> Brice Goglin wrote:
>
>> I discovered "dplace" today. I don't know how many people install/use it
>> on their clusters, but it looks interesting when you don't have advanced
>> binding capabilities in the MPI implementation. For instance, you could do:
>> $ mpirun -np 8 dplace 0,4,2,6,1,5,3,7 myprogram
>> to bind process ranks according to the machine topology.
>>
>> hwloc-calc can easily generate such a list of physical processors, for
>> instance:
>> $ hwloc-calc --physical proc:all --pulist
>> 0,4,2,6,1,5,3,7
>> or even restrict to one PU per socket with:
>> $ hwloc-calc --physical socket:all.core:0 --pulist
>> 0,1
>>
>> So hwloc-calc could help dplace significantly. Maybe we should put such
>> examples somewhere in the doc.
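>>
>> For instance (untested sketch, assuming dplace takes the processor list
>> in the same position as in the example above), the two could be combined
>> directly:
>> $ mpirun -np 8 dplace $(hwloc-calc --physical proc:all --pulist) myprogram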
>>
>> Brice
>>
>> _______________________________________________
>> hwloc-devel mailing list
>> hwloc-devel_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/hwloc-devel
>>
>
>