On 07/08/2012 13:06, Jeff Squyres wrote:
>> Aside from the main "discover" callback, backends may also define some
>> callbacks to be invoked when new objects are created. The main example is
>> Linux creating "OS devices" when a new "PCI device" is added by the PCI
>> backend. CUDA could use that too to fill GPU PCI devices. This is not
>> strictly needed since adding these devices could still be done later,
>> once the PCI backend is done. We'll see.
> This is a nifty idea. Is the idea that a callback can be registered to fire when a specific PCI vendor / device ID is found?
I am not sure yet. Linux would use the callback for some classes of
devices, CUDA for some vendor IDs. We could handle the general case (all
object types) and have the callback check the device attributes, but it
could be overkill.
>> Instead of allowing random API calls into plugin internals, we could
>> keep these backends internal, i.e. not making them plugins. At least for
>> OS backends, it makes sense. "synthetic" and "custom" have no reason to
>> be pluginified either; they depend on nothing.
> It might be nice to view all plugins as the same -- regardless of whether they are internal (i.e., part of libhwloc) or external (i.e., a standalone DSO). That way, the majority of the core code doesn't have to know/care whether plugins are internal or external.
> It would also allow slurping external plugins to be internal, which will be fairly important for embedded mode. A specific case which has come up for this multiple times is when higher-level MPI bindings packages (e.g., Python) dlopen libmpi into a private namespace. When OMPI then tries to dlopen its own DSO/external plugins, they can't find the symbols in libmpi that they depend on (because libmpi is in a private namespace). Hence, OMPI has to be built in a slurp-all-plugins-to-be-internal mode to support such configurations.
> As such, we'll need hwloc to also support this slurp-all-plugins-to-be-internal kind of mode, too.
> I can help with the build mojo for this, if desired.
I don't know enough about all this, so we'll need your help for sure :)
We'll see once backends are cleaned up.