
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] openmpi 1.6.3, job submitted through torque/PBS + Moab (scheduler) only land on one node even though multiple nodes/processors are specified
From: Ralph Castain (rhc_at_[hidden])
Date: 2013-01-24 09:52:52


How did you configure OMPI? If you add --display-allocation to your cmd line, does it show all the nodes?
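For reference, a minimal job script exercising that flag might look like the sketch below; the resource line and process count are illustrative, not taken from this thread:

```shell
#!/bin/sh
# Hypothetical PBS job script: request 2 nodes with 4 cores each
#PBS -l nodes=2:ppn=4
#PBS -N alloc-check

# --display-allocation makes mpirun print the node list Open MPI
# received from the Torque/PBS environment before launching ranks.
# Using hostname as the payload shows where each rank actually lands.
mpirun --display-allocation -np 8 hostname
```

If the allocation printout lists only one node, Open MPI was likely built without Torque (tm) support, which is enabled at configure time with `--with-tm`.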

On Jan 24, 2013, at 6:34 AM, Sabuj Pattanayek <sabujp_at_[hidden]> wrote:

> Hi,
>
> I'm submitting a job through torque/PBS, the head node also runs the
> Moab scheduler, the .pbs file has this in the resources line :
>
> #PBS -l nodes=2:ppn=4
>
> I've also tried something like :
>
> #PBS -l procs=56
>
> and at the end of script I'm running :
>
> mpirun -np 8 cat /dev/urandom > /dev/null
>
> or
>
> mpirun -np 56 cat /dev/urandom > /dev/null
>
> ...depending on how many processors I requested. The job starts, and
> $PBS_NODEFILE lists the nodes that the job was assigned, but all
> the cats are piled onto the first node. Any idea how I can get this
> to launch across multiple nodes? Note that I have OSU mpiexec working
> without problems with mvapich and mpich2 on our cluster to launch jobs
> across multiple nodes.
>
> Thanks,
> Sabuj
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users