
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] OpenMPI runtime-specific environment variable?
From: Reuti (reuti_at_[hidden])
Date: 2008-10-21 13:35:50


On 21 Oct 2008, at 18:52, Ralph Castain wrote:

> On Oct 21, 2008, at 10:37 AM, Adams, Brian M wrote:
>>> We do have some environmental variables that we guarantee to
>>> be "stable" across releases. You could look for a couple of
>>> others as well, but any of these would do.
>> Q: I just wrote a simple C++ program, including mpi.h and getenv
>> to check for these two variables and compiled with the mpicxx
>> wrapper (openmpi-1.2.5 as distributed with RHEL5). When running
>> this program with orterun, these variables come back NULL from the
>> environment. The same is true if I just orterun a shell script to
>> dump the environment to a file. Am I making an obvious mistake here?
> Crud - forgot you are using the old 1.2 series. No, we don't have
> any good variables for you to use there. You might consider
> updating to 1.3 (beta should come out soon) to get something
> stable. Otherwise, you're kinda stuck with the OMPI-internal ones,
> so you'll have to be prepared to make a change should anyone try to
> use it with 1.3 or higher as we go forward.
> If you absolutely have to do this with 1.2, your best bet is
> probably OMPI_MCA_universe as the others are even worse (many are
> gone in 1.3).

Isn't there an MPI_GET_VENDOR or something similar in the MPI standard
to query the implementation in use? I checked the MPI docs but couldn't
find anything like it. I would have thought that such a thing was
provided for. That would mean that a string with the version is embedded
in the binary (or shared library).

>> Doug is right that we could use an additional command line flag to
>> indicate MPI runs, but at this point, we're trying to hide that
>> from the user, such that all they have to do is run the binary vs.
>> orterun/mpirun the binary and we detect whether it's a serial or
>> parallel run.

And once you have this information, you decide for your user whether to
use mpirun (and the correct version of it) or just the plain binary?

Are you doing something like "strings the_binary" and grepping for
indications of the compilation type? For a standard Open MPI build with
shared libraries, "ldd the_binary" might reveal some information.
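
A sketch of that kind of inspection ("the_binary" is a placeholder for
the executable in question):

```shell
# ldd lists shared-library dependencies; a dynamically linked
# Open MPI binary will typically pull in libmpi.
ldd ./the_binary | grep -i mpi

# strings may reveal embedded version text, e.g. in a static build.
strings ./the_binary | grep -i "Open MPI"
```

Note that grep exits nonzero when nothing matches, which is itself a
usable signal in a wrapper script.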

-- Reuti

>> As for parsing the command line $argv[0] before MPI_Init, I don't
>> think it will help here. While MPICH implementations typically
>> left args like -p4pg -p4amslave on the command line, I don't see
>> that coming from OpenMPI-launched jobs.
> Really? That doesn't sound right - we don't touch the arguments to
> your application. We test that pretty regularly and I have always
> seen the args come through.
> Can you provide an example of where it isn't?
> Ralph
>> Brian
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]