
From: Georg Wassen (wassen_at_[hidden])
Date: 2007-06-28 03:49:31


Hello,

> FWIW: the reason you have to use PML_CALL() is by design. The MPI
> API has all the error checking stuff for ensuring that MPI_INIT
> completed, error checking of parameters, etc. We never invoke the
> top-level MPI API from elsewhere in the OMPI code base (except from
> within ROMIO; we didn't want to make wholesale changes to that
> package because it would make for extreme difficulty every time we
> imported a new version). There are fault tolerance reasons why it's
> not good to call back up to the top-level MPI API, too.

OK, that makes sense. PML_CALL() works, but the OpenIB problem (which I
also reproduced without my component, in a regular MPI program) makes
those calls very slow, and they fail with too many processes.
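
For reference, the direct call from inside my component looks roughly
like this (a minimal sketch against the MCA_PML_CALL macro; the payload
and the function name are made up, so treat the details as approximate):

    #include "mpi.h"
    #include "ompi/communicator/communicator.h"
    #include "ompi/mca/pml/pml.h"

    /* Blocking standard-mode send issued directly through the PML,
     * bypassing the MPI-level parameter checks. */
    static int send_to_peer(ompi_communicator_t *comm, int dst, int tag)
    {
        int value = 42;   /* hypothetical payload */
        return MCA_PML_CALL(send(&value, 1, MPI_INT, dst, tag,
                                 MCA_PML_BASE_SEND_STANDARD, comm));
    }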

> But I agree with Andrew; if this is init-level stuff that is not
> necessary to be exchanged on a per-communicator basis, then the modex
> is probably your best bet. Avoid using the RML directly if possible.
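
For context, a modex exchange during startup would look roughly like
the sketch below (assuming the mca_pml_base_modex_send/recv interface
as I understand it; the component and payload names are made up):

    #include "ompi/proc/proc.h"
    #include "ompi/mca/pml/base/pml_base_module_exchange.h"

    extern mca_base_component_t my_component;   /* hypothetical */

    /* at component init: publish this process's data to all peers */
    uint32_t my_data = 42;                      /* hypothetical payload */
    mca_pml_base_modex_send(&my_component, &my_data, sizeof(my_data));

    /* after startup: read what a peer (ompi_proc_t *proc) published */
    uint32_t *peer_data;
    size_t size;
    mca_pml_base_modex_recv(&my_component, proc,
                            (void **)&peer_data, &size);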

I'm now using the OOB during module init. Once the OpenIB issue is
fixed, I'll try switching back to PML_SEND(). Using the modex would
require an architectural change...
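
What I do now is roughly the following: pack an orte_buffer_t and send
it over the RML during module init (a sketch assuming the
orte_rml/orte_dss interfaces; the tag choice is made up, and real code
would have to reserve one):

    #include "opal/class/opal_object.h"
    #include "orte/dss/dss.h"
    #include "orte/mca/rml/rml.h"

    orte_rml_tag_t my_tag = ORTE_RML_TAG_DYNAMIC;  /* made-up tag choice */
    int32_t value = 42;                            /* hypothetical payload */

    orte_buffer_t *buf = OBJ_NEW(orte_buffer_t);
    orte_dss.pack(buf, &value, 1, ORTE_INT32);
    /* peer_name is the orte_process_name_t of the target process */
    orte_rml.send_buffer(&peer_name, buf, my_tag, 0);
    OBJ_RELEASE(buf);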

Thanks for your help!
Georg.