
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] custom modules per job (PBS/OpenMPI/environment-modules)
From: David Singleton (David.Singleton_at_[hidden])
Date: 2009-11-17 05:29:16


Hi Michael,

I'm not sure why you don't see Open MPI behaving like other MPIs with
respect to modules/environment on remote MPI tasks - we do.

xe:~ > qsub -q express -lnodes=2:ppn=8,walltime=10:00,vmem=2gb -I
qsub: waiting for job 376366.xepbs to start
qsub: job 376366.xepbs ready

[dbs900_at_x27 ~]$ module load openmpi
[dbs900_at_x27 ~]$ mpirun -n 2 --bynode hostname
x27
x28
[dbs900_at_x27 ~]$ mpirun -n 2 --bynode env | grep FOO
[dbs900_at_x27 ~]$ setenv FOO BAR
[dbs900_at_x27 ~]$ mpirun -n 2 --bynode env | grep FOO
FOO=BAR
FOO=BAR
[dbs900_at_x27 ~]$ mpirun -n 2 --bynode env | grep amber
[dbs900_at_x27 ~]$ module load amber
[dbs900_at_x27 ~]$ mpirun -n 2 --bynode env | grep amber
LOADEDMODULES=openmpi/1.3.3:amber/9
PATH=/apps/openmpi/1.3.3/bin:/home/900/dbs900/bin:/bin:/usr/bin:/opt/bin:/usr/X11R6/bin:/opt/pbs/bin:/sbin:/usr/sbin:/apps/amber/9/exe
_LMFILES_=/apps/Modules/modulefiles/openmpi/1.3.3:/apps/Modules/modulefiles/amber/9
AMBERHOME=/apps/amber/9
LOADEDMODULES=openmpi/1.3.3:amber/9
PATH=/apps/openmpi/1.3.3/bin:/home/900/dbs900/bin:/bin:/usr/bin:/opt/bin:/usr/X11R6/bin:/opt/pbs/bin:/sbin:/usr/sbin:/apps/amber/9/exe
_LMFILES_=/apps/Modules/modulefiles/openmpi/1.3.3:/apps/Modules/modulefiles/amber/9
AMBERHOME=/apps/amber/9
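(The transcript above hinges on the variable being exported before mpirun runs. A minimal local illustration of the same mechanism - no PBS or Open MPI required, and FOO/BAR are just the placeholder names from the session above:)

```shell
# A variable is visible to child processes such as `env` -- or to
# mpirun and the daemons it launches -- only after it is exported
# (setenv in csh, export in sh/bash).
export FOO=BAR
result=$(env | grep '^FOO=')
echo "$result"   # FOO=BAR
```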

David

Michael Sternberg wrote:
> Dear readers,
>
> With OpenMPI, how would one go about requesting to load environment modules (of the http://modules.sourceforge.net/ kind) on remote nodes, augmenting those normally loaded there by shell dotfiles?
>
>
> Background:
>
> I run a RHEL-5/CentOS-5 cluster. I load a bunch of default modules through /etc/profile.d/ and recommend to users to customize modules in ~/.bashrc. A problem arises for PBS jobs which might need job-specific modules, e.g., to pick a specific flavor of an application. With other MPI implementations (ahem) which export all (or judiciously nearly all) environment variables by default, you can say:
>
> #PBS ...
>
> module load foo # not for OpenMPI
>
> mpirun -np 42 ... \
> bar-app
>
> Not so with OpenMPI - any such customization is effective only for processes on the master (= local) node of the job, and any variables changed by a given module have to be passed explicitly via "mpirun -x VARNAME". On the remote nodes, those variables are not available to the dotfiles because they are passed only once orted is live (i.e., after dotfile processing by the shell), which then immediately spawns the application binaries (right?)
>
> I thought along the following lines:
>
> (1) I happen to run Lustre, which would allow writing a file coherently across nodes prior to mpirun, and thus hook into the shell dotfile processing, but that seems rather crude.
>
> (2) "mpirun -x PATH -x LD_LIBRARY_PATH …" would take care of a lot, but is not really general.
>
> Is there a recommended way?
>
>
> regards,
> Michael
>
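(For reference, the "-x" forwarding Michael describes in option (2) could be sketched as a PBS job script. This is a hypothetical fragment, not a tested configuration: the module name "foo", the process count, and the binary "bar-app" are placeholders taken from his message.)

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=8

module load foo   # adjusts PATH, LD_LIBRARY_PATH, etc. on the local node only

# Forward the changed variables explicitly to the remote ranks.
# This covers the common cases but, as noted above, is not fully
# general: any other variable the module sets must be listed too.
mpirun -np 42 -x PATH -x LD_LIBRARY_PATH bar-app
```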