
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] OpenMPI runtime-specific environment variable?
From: Ralph Castain (rhc_at_[hidden])
Date: 2008-10-21 13:54:26


On Oct 21, 2008, at 11:35 AM, Reuti wrote:

> Hi,
>
> Am 21.10.2008 um 18:52 schrieb Ralph Castain:
>
>> On Oct 21, 2008, at 10:37 AM, Adams, Brian M wrote:
>>
>>>> We do have some environmental variables that we guarantee to
>>>> be "stable" across releases. You could look for
>>>> OMPI_COMM_WORLD_SIZE, or OMPI_UNIVERSE_SIZE (there are a
>>>> couple of others as well, but any of these would do).
>>>
>>> Q: I just wrote a simple C++ program that includes mpi.h and uses
>>> getenv to check for these two variables, compiled with the mpicxx
>>> wrapper (openmpi-1.2.5 as distributed with RHEL5). When I run
>>> this program with orterun, these variables come back NULL from the
>>> environment. The same is true if I just orterun a shell script to
>>> dump the environment to a file. Am I making an obvious mistake
>>> here?
>>
>> Crud - forgot you are using the old 1.2 series. No, we don't have
>> any good variables for you to use there. You might consider
>> updating to 1.3 (beta should come out soon) to get something
>> stable. Otherwise, you're kinda stuck with the OMPI-internal ones,
>> so you'll have to be prepared to make a change should anyone try to
>> use it with 1.3 or higher as we go forward.
>>
>> If you absolutely have to do this with 1.2, your best bet is
>> probably OMPI_MCA_universe as the others are even worse (many are
>> gone in 1.3).
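
Just to make that env-var check concrete, here is a rough sketch of what
Brian described -- compile with mpicxx and run it both with and without
orterun. Treat the variable names as assumptions: OMPI_COMM_WORLD_SIZE is
only there in 1.3 and later, and OMPI_MCA_universe is an internal 1.2 name
that can change:

    // detect_launch.cc -- rough sketch, not an official recipe
    #include <cstdlib>
    #include <iostream>

    int main()
    {
        // present when launched by orterun/mpirun on 1.3 and later
        const char *size = std::getenv("OMPI_COMM_WORLD_SIZE");
        // internal fallback for the 1.2 series -- may disappear
        const char *universe = std::getenv("OMPI_MCA_universe");

        if (size != 0 || universe != 0) {
            std::cout << "looks like an Open MPI launch" << std::endl;
        } else {
            std::cout << "looks like a plain serial run" << std::endl;
        }
        return 0;
    }
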
>
> Isn't there an MPI_GET_VENDOR or similar call in the MPI standard to
> get the version in use? I checked the MPI docs but couldn't find
> anything like this. I would have thought such a thing was provided
> for. That would mean a string with the version is embedded in the
> binary (or shared library).

I don't believe there is a standard call like that, but there
certainly is a string in OMPI (OMPI_VERSION) that tells you the version.
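
If you want to get at it from code rather than by running strings on the
binary, mpi.h in Open MPI also carries compile-time version macros
(OMPI_MAJOR_VERSION and friends, if memory serves -- treat the exact names
as an assumption and check your mpi.h). A rough sketch:

    // version_check.cc -- prints the Open MPI version the binary was
    // built against, assuming the OMPI_*_VERSION macros exist in mpi.h
    #include <mpi.h>
    #include <iostream>

    int main()
    {
    #ifdef OMPI_MAJOR_VERSION
        std::cout << "Built against Open MPI "
                  << OMPI_MAJOR_VERSION << "."
                  << OMPI_MINOR_VERSION << "."
                  << OMPI_RELEASE_VERSION << std::endl;
    #else
        std::cout << "Not an Open MPI mpi.h" << std::endl;
    #endif
        return 0;
    }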

>
>
>
>>> Doug is right that we could use an additional command-line flag to
>>> indicate MPI runs, but at this point we're trying to hide that
>>> from the user, so that all they have to do is run the binary vs.
>>> orterun/mpirun the binary, and we detect whether it's a serial or
>>> parallel run.
>
> And once you have this information, you decide for your user whether
> to use mpirun (and which version of it) or just the plain binary?
>
> Are you doing something like "strings the_binary" and grepping for
> indications of the compilation type? For the standard Open MPI with
> shared libraries, an "ldd the_binary" might reveal some information.
>
> -- Reuti
>
>
>>> As for parsing the command line $argv[0] before MPI_Init, I don't
>>> think it will help here. While MPICH implementations typically
>>> left args like -p4pg -p4amslave on the command line, I don't see
>>> that coming from OpenMPI-launched jobs.
>>
>> Really? That doesn't sound right - we don't touch the arguments to
>> your application. We test that pretty regularly and I have always
>> seen the args come through.
>>
>> Can you provide an example of where it isn't?
>>
>> Ralph
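
For reference, the kind of trivial check I have in mind is just to echo
argv before MPI_Init and compare a plain run against an orterun launch
(nothing Open MPI-specific in this sketch):

    // argv_check.cc -- print the argument vector before MPI_Init
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char **argv)
    {
        // arguments exactly as delivered, before MPI touches them
        for (int i = 0; i < argc; ++i) {
            std::cout << "argv[" << i << "] = " << argv[i] << std::endl;
        }

        MPI_Init(&argc, &argv);
        MPI_Finalize();
        return 0;
    }

In our testing, ./argv_check foo and mpirun -np 2 ./argv_check foo should
print the same argument list.
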
>>
>>>
>>>
>>> Brian
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users