My guess is that you aren't doing the allocation correctly - since you are using qsub, can I assume you have Moab as your scheduler?

aprun should be forwarding the envars - do you see them if you just run "aprun -n 1 printenv"?
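For example, assuming the MCA parameters are set as OMPI_MCA_* environment variables before launch (btl_base_verbose is just an illustrative parameter here), something like

    export OMPI_MCA_btl_base_verbose=100
    aprun -n 1 printenv | grep OMPI_MCA

should show whether they actually reach the compute node. If the OMPI_MCA_* variables appear in the output, aprun is forwarding them and the problem lies elsewhere.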

On Nov 23, 2013, at 2:13 PM, Teranishi, Keita <> wrote:


I installed OpenMPI on our small XE6 using the configure options under the /contrib directory.  It appears to be working fine, but it ignores MCA parameters set as environment variables.  So I switched to mpirun (OpenMPI's), which does handle MCA parameters, but mpirun fails to allocate processes by core.  For example, when I allocate 32 cores (on 2 nodes) with "qsub -lmppwidth=32 -lmppnppn=16", mpirun recognizes the allocation as only 2 slots.  Is it possible for mpirun to handle the multicore nodes of the XE6 properly, or is there an option that makes aprun handle MCA parameters?
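Roughly, the sequence looks like this (the job script name, binary name, and the particular MCA parameter are placeholders for illustration):

    qsub -lmppwidth=32 -lmppnppn=16 job.sh   # request 32 cores, 16 per node
    # inside job.sh:
    export OMPI_MCA_btl_base_verbose=100     # MCA parameter set in the environment
    mpirun -np 32 ./a.out                    # mpirun sees the allocation as only 2 slots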

Keita Teranishi
Principal Member of Technical Staff
Scalable Modeling and Analysis Systems
Sandia National Laboratories
Livermore, CA 94551
+1 (925) 294-3738
