Hardware Locality Development Mailing List Archives

Subject: Re: [hwloc-devel] nvidia and nouveau driver differences
From: Brice Goglin (Brice.Goglin_at_[hidden])
Date: 2013-05-03 18:04:49

On 05/03/2013 23:42, Guy Streeter wrote:
> On 05/03/2013 04:13 PM, Brice Goglin wrote:
>> If I remember correctly, the NVIDIA or AMD proprietary drivers cannot use
>> the kernel sysfs API because it's GPL-only. They can't create devices in
>> sysfs, which is why hwloc doesn't get any GPU OS device with NVIDIA.
> That sounds right and makes sense.
>> card* and controlD64 are what we get with the open-source DRM drivers that
>> use the sysfs/drm kernel API. But I don't expect people to do much with them
>> as long as there's no way for an application to know whether it's using
>> card0 or card1. That's why there's an NVIDIA-specific plugin using NVCtrl:
>> you give it a display such as :0.0, and it returns the locality of the PCI
>> device running it.
>> Brice
> Can you give me an example of something that should show the display device
> when the NVIDIA driver is loaded? I think I configured hwloc properly:
> -----------------------------------------------------------------------------
> Hwloc optional build support status (more details can be found above):
> Probe / display I/O devices: PCI GL
> Graphical output (Cairo): yes
> XML input / output: full
> libnuma memory support: yes
> Plugin support: no
> -----------------------------------------------------------------------------

"GL" is indeed what you need above.

You should get something like this:

    HostBridge L#0
        PCI 10de:06d1
          GPU L#0 ":0.0"
        PCI 10de:06d1
          GPU L#1 ":0.3"
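
If you want that display-to-locality mapping from a program instead of
from lstopo, there are interoperability helpers for it. Here's a minimal
sketch, assuming your hwloc build has GL support and ships the hwloc/gl.h
helpers (":0.0" is just an example display name):

    /* Sketch: map an X display name to the matching hwloc GPU OS device
     * and print which CPUs are close to it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <hwloc.h>
    #include <hwloc/gl.h>

    int main(void)
    {
        hwloc_topology_t topology;
        hwloc_obj_t osdev, ancestor;
        char *cpuset_str;

        hwloc_topology_init(&topology);
        /* I/O discovery is off by default; enable it so PCI and GPU
         * OS devices show up in the topology. */
        hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_IO_DEVICES);
        hwloc_topology_load(topology);

        /* Find the GPU OS device whose name matches the display. */
        osdev = hwloc_gl_get_display_osdev_by_name(topology, ":0.0");
        if (!osdev) {
            fprintf(stderr, "no GPU OS device found for :0.0\n");
            hwloc_topology_destroy(topology);
            return 1;
        }

        /* Walk up from the OS device to the first non-I/O ancestor
         * to learn the locality of the PCI device running the display. */
        ancestor = hwloc_get_non_io_ancestor_obj(topology, osdev);
        hwloc_bitmap_asprintf(&cpuset_str, ancestor->cpuset);
        printf("display :0.0 is close to cpuset %s\n", cpuset_str);
        free(cpuset_str);

        hwloc_topology_destroy(topology);
        return 0;
    }

hwloc_gl_get_display_osdev_by_port_device() does the same lookup if you
have already parsed the display string into port/device numbers.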

Once the NVIDIA driver is loaded and the X server is running, make sure
your application has access to the X server. If you're not running
lstopo from within the X session (I have never actually tested this
case), you may need something like "xhost +" (to allow connections to
the X server) and/or "chmod 666 /dev/nvidia*" (to make the NVIDIA
device files accessible).
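
A quick way to check whether the X server is reachable at all (a sketch;
the GL plugin needs this same access to talk to NVCtrl through the
display, and ":0" is just an assumed display name):

    /* Sketch: verify that this process can open the X display. */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(":0"); /* or NULL to use $DISPLAY */
        if (!dpy) {
            fprintf(stderr,
                    "cannot open X display; check DISPLAY, xhost, permissions\n");
            return 1;
        }
        printf("X server is reachable\n");
        XCloseDisplay(dpy);
        return 0;
    }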