
Hardware Locality Development Mailing List Archives


Subject: Re: [hwloc-devel] Fwd: BGQ empty topology with MPI
From: Brice Goglin (Brice.Goglin_at_[hidden])
Date: 2012-03-26 02:14:58


On 26/03/2012 05:16, Christopher Samuel wrote:
> On 25/03/12 09:04, Daniel Ibanez wrote:
>
> > Additional printfs confirm that with MPI in the code,
> > hwloc_accessat succeeds on the various /sys/ directories, but the
> > overall procedure for getting PUs from these fails. Without MPI,
> > access to /sys/ directories fails but the fallback
> > hwloc_setup_pu_level works.
>
> Sounds like your I/O with MPI is getting redirected to the I/O node
> (and hence finding /sys from the Linux kernel there) but when you're
> running without MPI it's trying to open files on the compute node and
> the CNK isn't presenting the /sys directories, causing it to fall back.
>
> I've run lstopo on our BG/P and I get to see the 4 cores there whether
> it's the stock code or if I add an MPI_Init() to the start. The
> output from lstopo when built with --enable-debug confirms it's
> reporting kernel and hostname info from the I/O node associated with
> the block:
>
> Machine#0(Backend=Linux OSName=CNK OSRelease=2.6.16.60-304 OSVersion=1
> HostName=r00-m1-n04.pcf.vlsci.unimelb.edu.au Architecture=BGP) [...]

Thanks, that would explain such strange behavior.

For the record, you can run "lstopo -v" or even "lstopo -.xml" to get
more info, especially machine attributes.
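The two commands above can be run directly on a node; a minimal sketch, assuming hwloc is installed and lstopo is on the PATH:

```shell
# Verbose text output, including the Machine object's attributes
# (OSName, OSRelease, HostName, etc.):
lstopo -v

# Dump the full topology as XML to stdout ("-" means stdout,
# ".xml" selects the output format):
lstopo -.xml
```

On BG/P or BG/Q, comparing the HostName attribute in this output against the node you launched on should make the I/O-node redirection visible.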

Brice