
Open MPI User's Mailing List Archives


From: Doolittle, Joshua (joshua.doolittle_at_[hidden])
Date: 2006-06-15 10:13:26

That was exactly the problem I was having. We were used to doing it one
way, with a certain other MPI library. After digging around and not
doing it "the way it has always been done," I actually read the slurm
quickstart page. That fixed all my problems. But thank you all for
your help.

- Joshua Doolittle
- Intern 2
- (509) 376-3958
- EMSL High Perf. Computing

-----Original Message-----
From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
Behalf Of Brian Barrett
Sent: Thursday, June 15, 2006 6:39 AM
To: Open MPI Users
Subject: Re: [OMPI users] Trouble with open MPI and Slurm

On Wed, 2006-06-14 at 10:05 -0700, Doolittle, Joshua wrote:
> I am running Open MPI version 1.0.2 and slurm 1.1.0. I can run slurm
> jobs, and I can run mpi jobs. However, when I run a mpi job in slurm
> batch mode with 4 processes, the processes do not talk to each other.
> They act like they are the only process. I'm running these in slurm
> batch mode. The job that I'm running is a simple mpi optimized hello
> world. I'm running these on an opteron (x86_64) blade system from a
> head node. Any help would be greatly appreciated.

How are you running your batch job? Unlike some MPI implementations,
Open MPI jobs cannot be started under SLURM without the use of mpirun.
You can either run mpirun under an interactive session:

   srun -N 4 -A
   mpirun -np 4 ./foobar

or from a batch script:

   echo "mpirun -np 4 ./foobar" > <script_name>
   chmod +x <script_name>
   srun -N 4 -b <script_name>

But you can't submit your application directly without mpirun. This is
a feature we would like to support in the future, but there are some
licensing issues (we would have to link with their GPL'ed libraries,
which wouldn't work so well for us).
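For reference, the batch approach Brian describes can be written out as a small wrapper script. This is only a sketch of the steps from the message above; the filename `ompi_job.sh` and the node/process counts are illustrative placeholders, not from the original thread (`srun -b` was the batch-submission flag in SLURM 1.x):

```shell
# Create a wrapper script that launches the MPI job via mpirun.
# Open MPI (1.0.x era) requires mpirun even under SLURM, so the
# script wraps the mpirun invocation; srun then runs the script
# in batch mode across the allocated nodes.
cat > ompi_job.sh <<'EOF'
#!/bin/sh
# mpirun detects the SLURM allocation it is running inside.
mpirun -np 4 ./foobar
EOF
chmod +x ompi_job.sh

# Submit as a batch job on 4 nodes (requires a SLURM 1.x cluster):
#   srun -N 4 -b ./ompi_job.sh
```

The key point either way is that srun launches mpirun (directly or via a script), and mpirun in turn starts the MPI processes, rather than srun starting the MPI binary itself.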


   Brian Barrett
   Open MPI developer