> -----Original Message-----
> From: users-bounces_at_[hidden]
> [mailto:users-bounces_at_[hidden]] On Behalf Of semper
> Sent: Thursday, April 20, 2006 9:50 PM
> To: users_at_[hidden]
> Subject: Re: [OMPI users] OpenMPI and SLURM configuration ??
> > No, the location of $HOME should not matter.
> > What happens if you "mpirun -np 2 uptime"? (i.e., use
> mpirun to launch
> > a non-MPI application)
> It returns the right result, but still only two local processes are launched.
So this is *inside* a SLURM job? I.e., in "srun -N 2 -A"? What does
"env | grep SLURM" show?
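A quick way to check is from inside the allocation itself. This is a sketch; the exact variable names and values vary by SLURM version, and the job ID and node names below are illustrative assumptions:

```shell
# Run this in the shell you got from "srun -N 2 -A".
env | grep SLURM
# If you are really inside a SLURM job, you should see variables
# along these lines (names/values are examples, not guaranteed):
#   SLURM_JOBID=1234
#   SLURM_NNODES=2
#   SLURM_NODELIST=IA64_node[0-1]
# If this prints nothing, the shell is not under SLURM's control and
# Open MPI will fall back to launching every process on the local node.
```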
Note that launching all N processes on a single node is the default behavior
for Open MPI if you don't specify a hostfile and no hosts are implicitly
specified by a resource manager (e.g., you're running in a shell that is
not under the control of SLURM).
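To illustrate the difference, here is a minimal sketch; the hostnames are the ones from this thread and are assumptions about your cluster:

```shell
# Without any host information, Open MPI runs all ranks locally:
mpirun -np 2 hostname        # prints the local hostname twice

# With a hostfile listing one host per line, the ranks are spread out:
cat > hostfile <<'EOF'
IA64_node0
IA64_node1
EOF
mpirun -np 2 --hostfile hostfile hostname   # one rank per node
```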
> I tried again to add a hostfile option "--hostfile
> $HOME/openmpi/bin/hostfile" to
> mpirun, with the hostfile containing two entries: IA64_node0
> and IA64_node1, but
> this time it complained:
> [time returned by IA64_node0]
> bash: line 1: orted: command not found
As noted by someone else, you need to have Open MPI installed and
available in your PATH on all nodes. See the FAQ:
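One way to verify this is to check each node through a non-interactive shell, since that is what the launcher sees and it may not source your usual login files. This is a sketch; the node names and install prefix are assumptions, so substitute your own:

```shell
# Check that orted is findable on every node via a non-interactive shell:
for node in IA64_node0 IA64_node1; do
  ssh $node 'which orted || echo "orted not in PATH on $(hostname)"'
done

# If orted is missing, add your Open MPI install to the PATH that
# non-interactive shells see (e.g., in ~/.bashrc on every node):
#   export PATH=/opt/openmpi/bin:$PATH
#   export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
```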
> cluster. The 4 nodes can log in to each other via "ssh" or "rsh"
> without a password
When using SLURM, rsh/ssh are not used to start up jobs -- SLURM's native
launch interface is used instead.
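The overall workflow looks roughly like this; a sketch assuming a 2-node allocation as in this thread:

```shell
# 1. Get an allocation and an interactive shell on it (SLURM sets the
#    SLURM_* environment variables in that shell):
srun -N 2 -A

# 2. From inside that shell, mpirun detects the SLURM environment and
#    launches one process per allocated node through SLURM's own
#    launcher -- no rsh/ssh connections are made:
mpirun -np 2 uptime
```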
Server Virtualization Business Unit