Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Problems with program-execution with OpenMPI: Orted: command not found
From: Doug Reeder (dlr_at_[hidden])
Date: 2008-04-25 12:10:45


Using the modules software (modules.sourceforge.net) works pretty
well for managing multiple MPI flavors. You still need to put each
MPI flavor's bin, lib, and include files in uniquely named paths;
modules then lets you put the appropriate MPI flavor in your path for
each application. It takes a little effort to set modules up, but then
it is easy to use. It is particularly helpful for users who aren't
comfortable editing their shell startup files (.bashrc, .cshrc, etc.).
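
For example (the install prefixes and module names below are just
placeholders), you might install each flavor under its own directory
and switch with module commands, assuming each modulefile prepends its
flavor's bin directory to PATH:

   /opt/openmpi-1.2.6/{bin,lib,include}
   /opt/mpich-1.2.7/{bin,lib,include}

   module load openmpi      # Open MPI's mpicc/mpirun are now first in PATH
   mpicc my_mpi_app.c -o my_mpi_app.openmpi
   module unload openmpi
   module load mpich        # now MPICH's wrappers come first
   mpicc my_mpi_app.c -o my_mpi_app.mpich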

Doug Reeder
On Apr 25, 2008, at 8:46 AM, Jeff Squyres wrote:

> On Apr 25, 2008, at 10:54 AM, Hans Wurst wrote:
>
>>> So you'll need to
>>> compile your benchmarks for each MPI implementation that you want to
>>> test (i.e., use that MPI's wrapper compilers to compile them).
>>
>> I'm not sure what an MPI wrapper compiler is or how it
>> works.
>
> mpicc (et al.) are the "wrapper" compilers. So you compile your app
> with:
>
> mpicc my_mpi_app.c -o my_mpi_app
>
> All the "wrapper" compilers do is add in the relevant compiler /
> linker flags and invoke the underlying compiler. See:
>
> http://www.open-mpi.org/faq/?category=mpi-apps#general-build
> http://www.open-mpi.org/faq/?category=mpi-apps#cant-use-wrappers
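> 
> If you're curious about exactly what flags a wrapper adds, Open MPI's
> wrapper compilers accept a --showme option that prints the underlying
> command line instead of executing it (the exact output depends on your
> installation), e.g.:
> 
> mpicc --showme my_mpi_app.c -o my_mpi_app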
>
>> Maybe we can discuss this with a little example:
>>
>> mpptest requires the current installation path of the MPI
>> implementation before compiling. When I switch between MPI
>> implementations, do I have to re-compile the benchmark each time? If
>> not, how do I handle that issue? How do I keep the two compiled
>> executables separate?
>
> In general, yes, you need to have different executables for each MPI
> implementation (likely compiled with their wrapper compilers, or
> whatever compiler/linker flags may be required for that MPI
> implementation).
>
> I don't recall mpptest's build system offhand; a simple solution would
> be to have multiple copies of the mpptest software and build them each
> for a single MPI implementation.
>
> Another option is to build for one, rename the output executable
> (e.g., "mv mpptest mpptest.openmpi"), and then repeat as desired.
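> 
> Setting aside mpptest's actual build system, the general pattern is
> just to compile with each implementation's wrapper compiler and keep
> distinctly named outputs (the install paths below are placeholders):
> 
> /path/to/openmpi/bin/mpicc my_mpi_app.c -o my_mpi_app.openmpi
> /path/to/mpich/bin/mpicc my_mpi_app.c -o my_mpi_app.mpich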
>
>>> The --prefix option (and friends) make the ssh/rsh command line much
>>> more complex, effectively setting PATH and LD_LIBRARY_PATH for
>>> you on
>>> each remote machine before launching orted.
>>
>> OK, I tried that and it works great. Knowing that, I've got one more
>> question regarding different MPI implementations on one node. What
>> is the smartest way to switch between them?
>> Changing the PATHs in .bashrc and rebooting the nodes? Is there
>> a smart way to do that online, without a reboot? Would it be possible
>> to have two separate users, "MPICHuser" and "OpenMPIuser", each with
>> the PATH for the corresponding MPI implementation, and to launch
>> the processes for the different implementations as these separate
>> users?
>
>
> No, you likely don't need anything so elaborate (at least for Open
> MPI).
>
> If you use the full/absolute path name to Open MPI's mpirun, it'll
> enable the --prefix functionality for you. Keep in mind that --prefix
> functionality means that you don't need any setup on the remote node
> (nothing in your .bashrc, etc.). So you can:
>
> /path/to/first/openmpi/bin/mpirun ....
> /path/to/second/openmpi/bin/mpirun ....
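> 
> (Equivalently, you can pass the install directory explicitly with
> mpirun's --prefix option, e.g. "mpirun --prefix
> /path/to/first/openmpi ....".)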
>
> This will use the two different Open MPI installations. Or, if you
> use the --enable-mpirun-prefix-by-default option when
> configuring/building Open MPI, then you can use something like
> environment modules (http://modules.sf.net/) to switch between
> different MPI implementations by setting them in your PATH. Then
> just [Open MPI's] "mpirun" will do the Right thing by default,
> perhaps something like this:
>
> module load openmpi/1.2.5
> mpirun ....
> module unload openmpi
> module load openmpi/1.2.6
> mpirun ....
>
> And so on.
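> 
> For reference, building an Open MPI installation that behaves that way
> might look something like this (the install prefix is just an example):
> 
> ./configure --prefix=/opt/openmpi-1.2.6 --enable-mpirun-prefix-by-default
> make all install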
>
> For MPICH, I don't know exactly what they require in terms of remote
> path setup, etc. I don't know if it has --prefix-like functionality,
> or if it does it automatically (like OMPI's
> --enable-mpirun-prefix-by-default), etc. You'll need to check their
> docs.
>
> --
> Jeff Squyres
> Cisco Systems