Hardware Locality Users' Mailing List Archives


Subject: Re: [hwloc-users] Problem getting cpuset of MPI task
From: Hendryk Bockelmann (bockelmann_at_[hidden])
Date: 2011-02-10 03:07:24


Hey Brice,

I already thought so, but thank you for the explanation.
On our clusters the job scheduler binds the MPI tasks, but it is not
always clear to which resources. So it would be great for us to know
where a task runs, so that we can adapt the MPI communicators to
increase performance.
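
For illustration, this is the kind of adaptation I have in mind (only a
sketch; the hash helper and function name are made up): split
MPI_COMM_WORLD into per-node communicators using the processor name as
the key.

#include <mpi.h>

/* djb2 string hash, truncated to a non-negative int usable as a color */
static int host_color(const char *s) {
    unsigned h = 5381;
    while (*s) h = h * 33u + (unsigned char)*s++;
    return (int)(h & 0x7fffffff);
}

/* Group all ranks running on the same host into one communicator. */
static void make_node_comm(MPI_Comm *node_comm) {
    char name[MPI_MAX_PROCESSOR_NAME];
    int len, rank;
    MPI_Get_processor_name(name, &len);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* equal color => same communicator; note that a hash collision
       between two different hosts would wrongly merge them in this sketch */
    MPI_Comm_split(MPI_COMM_WORLD, host_color(name), rank, node_comm);
}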
Maybe just a note on the hwloc output on the cluster: while on my local
machine all MPI tasks are able to explore the whole topology, on the
cluster each task only sees itself, e.g. for task 7:

7: Machine#0(Backend=AIX OSName=AIX OSRelease=1 OSVersion=6 HostName=p191 Architecture=00C83AC24C00), cpuset: 0x0000c000
7: NUMANode#0, cpuset: 0x0000c000
7: L2Cache#0(0KB line=0), cpuset: 0x0000c000
7: Core#0, cpuset: 0x0000c000
7: PU, cpuset: 0x00004000
7: PU#0, cpuset: 0x00008000
7:--> root_cpuset of process 7 is 0x0000c000

Nevertheless, all MPI tasks have different cpusets, and since the nodes
are homogeneous one can guess the whole binding from the lstopo output
and the HostName of each task. Perhaps you can tell me whether such a
restricted topology is due to hwloc or to the fixed binding done by the
job scheduler?
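
Concretely, the bookkeeping could look like this (again only a sketch;
the record length is arbitrary): every rank sends its host name and its
cpuset string to rank 0, which can then match them against the lstopo
output of each node.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define REC_LEN (MPI_MAX_PROCESSOR_NAME + 64) /* host name + cpuset string */

/* cpuset_str is the string produced by hwloc_bitmap_asprintf() */
static void report_bindings(int myid, int numprocs, const char *cpuset_str) {
    char rec[REC_LEN], name[MPI_MAX_PROCESSOR_NAME];
    char *all = NULL;
    int len, i;
    MPI_Get_processor_name(name, &len);
    snprintf(rec, sizeof(rec), "%s %s", name, cpuset_str);
    if (myid == 0)
        all = malloc((size_t)numprocs * REC_LEN);
    MPI_Gather(rec, REC_LEN, MPI_CHAR, all, REC_LEN, MPI_CHAR, 0,
               MPI_COMM_WORLD);
    if (myid == 0) {
        for (i = 0; i < numprocs; i++)
            printf("rank %d: %s\n", i, all + (size_t)i * REC_LEN);
        free(all);
    }
}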

Greetings,
Hendryk

On 09/02/11 17:12, Brice Goglin wrote:
> On 09/02/2011 16:53, Hendryk Bockelmann wrote:
>> Since I am new to hwloc there might be a misunderstanding on my
>> side, but I have a problem getting the cpuset of MPI tasks. I just
>> want to run a simple MPI program to see on which cores (or logical
>> processors, in case of hyperthreading or SMT) the tasks run, so that
>> I can arrange my MPI communicators.
>>
>> For the program below I get the following output:
>>
>> Process 0 of 2 on tide
>> Process 1 of 2 on tide
>> --> cpuset of process 0 is 0x0000000f
>> --> cpuset of process 0 after singlify is 0x00000001
>> --> cpuset of process 1 is 0x0000000f
>> --> cpuset of process 1 after singlify is 0x00000001
>>
>> So why do both MPI tasks report the same cpuset?
>
> Hello Hendryk,
>
> Your processes are not bound, so they may run anywhere they want.
> hwloc_get_cpubind() tells you where they are bound. That's why the
> cpuset is 0x0000000f at first (all the existing logical processors in
> the machine).
>
> You want to know where they actually run, which is different from
> where they are bound. The former is included in the latter: the former
> is a single processor, while the latter may be any combination of
> processors.
>
> hwloc cannot tell you where a task runs yet, but I am looking at
> implementing it. I actually sent a patch to hwloc-devel about it
> yesterday [1]. You would just have to replace get_cpubind with
> get_cpuexec (or whatever the final function name is).
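>
> In your program that would be something like this (assuming the final
> function keeps get_cpubind's signature; hwloc_get_cpuexec is only the
> working name from the patch):
>
>     hwloc_bitmap_t where = hwloc_bitmap_alloc();
>     /* last processor(s) this process was observed running on */
>     hwloc_get_cpuexec(topology, where, 0);
>     hwloc_bitmap_asprintf(&str, where);
>     printf("--> process %d last ran on %s\n", myid, str);
>     free(str);
>     hwloc_bitmap_free(where);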
>
> You should note that such a function would not be guaranteed to return
> something that is still true when you use the result, since the process
> may migrate to another processor in the meantime.
>
> Also note that hwloc_bitmap_singlify() is usually used to "simplify" a
> cpuset (to avoid migration between multiple SMT threads, for instance)
> before binding a task with set_cpubind. It is useless in the code you
> posted.
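>
> The usual pattern is rather something like (a sketch):
>
>     hwloc_get_cpubind(topology, cpuset, 0);
>     hwloc_bitmap_singlify(cpuset);          /* keep a single PU out of the set */
>     hwloc_set_cpubind(topology, cpuset, 0); /* rebind to that one PU */
>
> so that the task cannot bounce between SMT siblings afterwards.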
>
> Brice
>
> [1] http://www.open-mpi.org/community/lists/hwloc-devel/2011/02/1915.php
>
>
>
>> Here is the program (attached is the output of
>> hwloc-gather-topology.sh):
>>
>> #include <stdio.h>
>> #include <stdlib.h> /* for free() */
>> #include "hwloc.h"
>> #include "mpi.h"
>>
>> int main(int argc, char* argv[]) {
>>
>> hwloc_topology_t topology;
>> hwloc_bitmap_t cpuset;
>> char *str = NULL;
>> int myid, numprocs, namelen;
>> char procname[MPI_MAX_PROCESSOR_NAME];
>>
>> MPI_Init(&argc,&argv);
>> MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
>> MPI_Comm_rank(MPI_COMM_WORLD,&myid);
>> MPI_Get_processor_name(procname,&namelen);
>>
>> printf("Process %d of %d on %s\n", myid, numprocs, procname);
>>
>> hwloc_topology_init(&topology);
>> hwloc_topology_load(topology);
>>
>> /* query the current binding of this process (flags = 0: whole process) */
>> cpuset = hwloc_bitmap_alloc();
>> hwloc_get_cpubind(topology, cpuset, 0);
>> hwloc_bitmap_asprintf(&str, cpuset);
>> printf("--> cpuset of process %d is %s\n", myid, str);
>> free(str);
>> hwloc_bitmap_singlify(cpuset); /* reduce the set to a single PU */
>> hwloc_bitmap_asprintf(&str, cpuset);
>> printf("--> cpuset of process %d after singlify is %s\n", myid, str);
>> free(str);
>>
>> hwloc_bitmap_free(cpuset);
>> hwloc_topology_destroy(topology);
>>
>> MPI_Finalize();
>> return 0;
>> }
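>>
>> Compiled and launched along these lines (file name assumed):
>>
>>   mpicc cpuset_test.c -lhwloc -o cpuset_test
>>   mpirun -np 2 ./cpuset_test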