
Open MPI User's Mailing List Archives


From: Brian Barrett (brbarret_at_[hidden])
Date: 2006-03-01 00:23:14


On Feb 28, 2006, at 11:04 PM, Durga Choudhury wrote:

> We are not using Irix but Linux as the operating system. The
> config.guess script identifies the system as
> mips64-unknown-linux-gnu. I guess it identifies the platform as
> "unknown" because it is all proprietary, home-built hardware.

Yeah, with Linux the second field generally seems to be something
generic like "unknown" or "pc". All the OMPI configure script ever
looks at are the first and last fields (the processor architecture
and the OS).

> Now about netpipe, you are both right and wrong. You are
> absolutely right that netpipe does not like more than 2 processes
> (it kills itself). Fortunately, I only have 2 boards in my test
> cluster, so that is not a problem. And Open MPI does spawn 2
> copies of netpipe on the two boards; I have verified it by doing a
> "ps -ef" on both boards and seeing the process running. However, I
> used mpiexec instead of mpirun to create the processes. My
> question is (this is something I have always wondered): what is
> the difference between mpirun and mpiexec?

With Open MPI, absolutely nothing. If you notice, they are both
symlinks to something called orterun, which is Open MPI's job startup
application. The reason for their existence is historical. MPI-1
did not specify how processes were started, but many implementations
ended up with an mpirun command, and each implementation had a
different command line usage. MPI-2 added mpiexec as an attempt to
provide a uniform job startup command. They called it mpiexec
because mpirun was already used by so many implementations and it
was impossible to unify all the command line options.
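
You can check the symlinks yourself; assuming an install prefix of
/usr/local (adjust for your setup), you'd see something like:

   $ ls -l /usr/local/bin/mpirun /usr/local/bin/mpiexec
   ... /usr/local/bin/mpiexec -> orterun
   ... /usr/local/bin/mpirun -> orterun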

Open MPI provides mpiexec because that's what the standard says we
should do. We implemented the mpiexec syntax (plus some things we
figured users would want). Since everyone expects us to have an
mpirun, but we had some flexibility in its command line syntax, we
just made mpirun the same as mpiexec. In other MPI implementations
(LAM/MPI, for example), this is not the case, and the two commands
have slightly different semantics.
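
So on your two boards, these two invocations should behave
identically (the hostnames and the netpipe binary name here are just
placeholders for whatever your setup uses):

   $ mpirun  -np 2 --host board1,board2 ./NPmpi
   $ mpiexec -np 2 --host board1,board2 ./NPmpi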

> I will run it through the debugger tomorrow and let you know of
> the outcome.

Hopefully that will shed some light on the problem.

Brian

-- 
   Brian Barrett
   Open MPI developer
   http://www.open-mpi.org/