
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Open MPI 1.2.4 verbosity w.r.t. osc pt2pt
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-12-12 08:40:56

On Dec 11, 2007, at 9:08 AM, Lisandro Dalcin wrote:

>> (for a nicely-formatted refresher of the issues, check out
> Sorry for the late response...
> I've finally 'solved' this issue by using RTLD_GLOBAL for loading the
> Python extension module that actually calls MPI_Init(). However, I'm
> not completely sure if my hackery is completely portable.
> Looking briefly at the end of the linked wiki page, you say that
> if the explicit linking of components to libmpi is removed, then
> dlopen() has to be called explicitly.
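
The RTLD_GLOBAL trick described above can be done from pure Python by temporarily changing the interpreter's dlopen flags around the import. A minimal sketch (Unix-only; the module name passed in would be the extension that calls MPI_Init(), which is an assumption here):

```python
import os
import sys

def import_with_rtld_global(name):
    """Import an extension module with RTLD_GLOBAL temporarily enabled.

    With RTLD_GLOBAL in effect, the extension's dependent libraries
    (e.g. libmpi) export their symbols globally, so components that
    Open MPI dlopen()s later can resolve symbols from libmpi.
    """
    saved = sys.getdlopenflags()
    sys.setdlopenflags(saved | os.RTLD_GLOBAL)
    try:
        return __import__(name)
    finally:
        # Restore the default flags so unrelated imports are unaffected.
        sys.setdlopenflags(saved)

# Hypothetical usage -- the real module name depends on the application:
# mpi_ext = import_with_rtld_global("mympiext")
```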


> Well, this would be a major headache for me, because of portability
> issues. Please note that I've developed mpi4py on a rather old 32-bit
> Linux box, but it works on many different platforms and OSes. I really
> do not have the time to test and figure out how to appropriately call
> dlopen() on platforms/OSes that I do not even have access to!

Yes, this is problematic; dlopen is fun on all the various OS's...

FWIW: we use the Libtool DL library for this kind of portability; OMPI
itself doesn't have all the logic for the different OS loaders.
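
For illustration, Python's ctypes provides a similar portability layer over the platform loader; a minimal sketch of pre-loading libmpi with global symbol visibility (the "mpi" library name is an assumption and may need a full path on some systems):

```python
import ctypes
import ctypes.util

def load_global(name):
    """Load a shared library with RTLD_GLOBAL; return None if not found.

    ctypes.util.find_library searches in a platform-appropriate way,
    papering over OS loader differences much as libltdl does for C.
    """
    path = ctypes.util.find_library(name)
    if path is None:
        return None
    # RTLD_GLOBAL makes the library's symbols visible to components
    # that Open MPI dlopen()s later.
    return ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)

# Hypothetical usage; "mpi" may need to be an absolute path:
# libmpi = load_global("mpi")
```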

> Anyway, perhaps Open MPI could provide an extension: a function call,
> let's say 'ompi_load_dso()' or something like that, that can be called
> before MPI_Init() to set up the monster. What do you think about
> this? Would it be hard for you?

(after much thinking...) Perhaps a better solution would be an MCA
parameter: if the logical "mca_do_dlopen_hackery" (or whatever) MCA
parameter is found to be true during the very beginning of MPI_INIT
(down in the depths of opal_init(), actually), then we will
lt_dlopen[_advise]("<path>/libmpi"). For completeness, we'll do the
corresponding dlclose in opal_finalize(). I need to think about this
a bit more and run it by Brian Barrett... he's quite good at finding
holes in these kinds of complex scenarios. :-)

This should hypothetically allow you to do a simple putenv() before
calling MPI_INIT, and then the Right magic should occur.
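
Since Open MPI reads MCA parameters from OMPI_MCA_* environment variables, the putenv() step from Python would look something like this sketch (the parameter name is the hypothetical one from this discussion, not an existing Open MPI parameter):

```python
import os

# Open MPI picks up MCA parameters from the environment as
# OMPI_MCA_<param>.  Setting os.environ calls putenv() under the hood,
# so this must happen before the extension module calls MPI_Init().
os.environ["OMPI_MCA_do_dlopen_hackery"] = "1"  # hypothetical parameter

# Then import the extension that initializes MPI, e.g.:
# import mpi4py.MPI
```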

Jeff Squyres
Cisco Systems