Thanks for your reply. My problem was actually caused by still having
include 'mpif.h' in the code, rather than use mpi. But the info about
NSLOTS, etc., is good to know.
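For the record, the fix on my end looked roughly like this (the *.f90
glob and the wrapper path are just from my setup, so adjust as needed):

# Find any Fortran sources still including the old header instead of
# the MPI module
grep -n "mpif.h" *.f90
# After replacing each include with "use mpi", rebuild with the
# Open MPI compiler wrapper
/home/jason/openmpi-1.4.3-install/bin/mpif90 -o myprog myprog.f90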
P.S. I had originally used $MPI_DIR in the mpirun call, but changed it to
the explicit directory in the course of trying to fix the problem.
On Thu, 2011-04-07 at 11:00 -0400, Prentice Bisbal wrote:
> On 04/06/2011 07:09 PM, Jason Palmer wrote:
> > Hi,
> > I am having trouble running a batch job in SGE using Open MPI. I have read
> > the FAQ, which says that Open MPI will automatically do the right thing, but
> > something seems to be wrong.
> > Previously I used MPICH1 under SGE without any problems. I'm avoiding MPICH2
> > because it doesn't seem to support static compilation, whereas I was able to
> > build Open MPI with Open64 and compile my program statically.
> > But I am having problems launching. According to the documentation, I should
> > be able to have a script file, qsub.sh:
> > #!/bin/bash
> > #$ -cwd
> > #$ -j y
> > #$ -S /bin/bash
> > #$ -q all.q
> > #$ -pe orte 18
> > MPI_DIR=/home/jason/openmpi-1.4.3-install/bin
> > /home/jason/openmpi-1.4.3-install/bin/mpirun -np $NSLOTS myprog
> If you have SGE integration, you should not specify the number of slots
> requested on the command line. Open MPI will speak directly to SGE (or
> vice versa) to get this information.
> Also, what is the significance of specifying MPI_DIR? I think you want to
> add that to your PATH, and then export it to the rest of the nodes by
> using the -V switch to qsub. If the correct mpirun isn't found first in
> your PATH, your job will definitely fail when launched on the slave hosts.
> You should also add the path to the MPI libraries to your
> LD_LIBRARY_PATH, or else you'll end up with run-time linking problems.
> For example, I would change your submission script to look like this:
> #!/bin/bash
> #$ -cwd
> #$ -j y
> #$ -S /bin/bash
> #$ -q all.q
> #$ -pe orte 18
> #$ -V
> # MPI_DIR is the install prefix, not its bin subdirectory
> MPI_DIR=/home/jason/openmpi-1.4.3-install
> export PATH=$MPI_DIR/bin:$PATH
> export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH
> mpirun myprog
> This may not fix all of your problems, but it will definitely fix some of them.
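For completeness, the submit-and-check cycle with that script is the
usual SGE routine (qsub.sh is the file name from my first mail):

qsub qsub.sh        # submit under the orte parallel environment
qstat -u $USER      # confirm the job is queued/running with 18 slots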