
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] ompi_info
From: George Bosilca (bosilca_at_[hidden])
Date: 2013-07-18 08:46:25

On Jul 17, 2013, at 20:15 , "Jeff Squyres (jsquyres)" <jsquyres_at_[hidden]> wrote:

> On Jul 17, 2013, at 12:16 PM, Nathan Hjelm <hjelmn_at_[hidden]> wrote:
>> As Ralph suggested you need to pass the --level or -l option to see all the variables. --level 9 will print everything. If you think there are variables everyday users should see you are welcome to change them to OPAL_INFO_LVL_1. We are trying to avoid moving too many variables to this info level.
> I think George might have a point here, though. He was specifically asking about the --all option, right?
> I think it might be reasonable for "ompi_info --all" to actually show *all* MCA params (up through level 9).
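For reference, the level mechanism Nathan and Jeff are discussing looks like this (a sketch; the flags are those named in the thread, but exact output depends on your Open MPI build):

```shell
# Default invocation: only the low-level (end-user, OPAL_INFO_LVL_1)
# parameters of the tcp BTL are shown.
ompi_info --param btl tcp

# Pass --level (or -l) 9 to print every parameter, including
# developer-level ones.
ompi_info --param btl tcp --level 9

# The point under debate: whether --all should implicitly show
# everything up through level 9, or require --level explicitly.
ompi_info --all --level 9
```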

Thanks Jeff,

I'm totally puzzled by the divergence of opinion in this community on the word ALL. ALL as in "every single one of them", not as in "4 poorly chosen MCA arguments that I don't even know how to care about".

> Thoughts?

Give the word ALL back its original meaning: "the whole quantity or extent of a group".

>>> Btw, something is wrong in the following output. I have a "btl = sm,self" in my .openmpi/mca-params.conf so I should not even see the BTL TCP parameters.
>> I think ompi_info has always shown all the variables regardless of what you have the selection variable set to (at least in some cases). We now just display everything in all cases. An additional benefit to the updated code is that if you set a selection variable through the environment (OMPI_MCA_btl=self,sm) it no longer appears as unset in ompi_info. The old code unset all selection variables in order to ensure all parameters got printed (very annoying but necessary).
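For context, the two ways of restricting BTL selection that come up in this exchange (the file path and the values are taken from George's and Nathan's messages; the final command illustrates the disputed behavior):

```shell
# File-based selection, as in George's setup:
#   ~/.openmpi/mca-params.conf contains the line
#   btl = sm,self

# Or equivalently through the environment, as Nathan describes:
export OMPI_MCA_btl=self,sm

# With the updated code, ompi_info still lists the tcp BTL's
# parameters despite the restriction above -- the behavior
# George objects to.
ompi_info --param btl tcp --level 9
```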

Ralph's comment above is not accurate. Prior to this change (well, the one from a few weeks ago), explicitly forbidden components did not leave traces in the MCA parameter list. I validated this with the latest stable.

> Yes, I think I like this new behavior better, too.
> Does anyone violently disagree?

Yes. This behavior means that every single MPI process out there will 1) load all existing .so components, and 2) give them a chance to leave undesired traces in the application's memory. So first we generate increased I/O traffic, and second we use memory that shouldn't be used. We can argue about the impact of all this, but from my perspective Open MPI is doing it even when explicit arguments to prevent the use of these components were provided.

