On a Torque system, your job is typically started on a backend node.
Thus, you need to have the Torque libraries installed on those nodes -
or else build OMPI statically, as you found.
I have never tried --enable-mca-static, so I have no idea if this
works or what it actually does. If I want static, I just build the
entire tree that way.
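For reference, a whole-tree static build along those lines might look like this (the prefix and tm path are illustrative assumptions, not taken from this thread):

```shell
# Sketch only: build all of Open MPI statically, so tm support is linked
# into the executables rather than loaded from mca_plm_tm.so at run time.
# --prefix and --with-tm values are assumed for illustration.
./configure --prefix=$HOME/openmpi-1.3-static \
    --with-tm=/usr/pbs \
    --enable-static --disable-shared
make all install
```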
If you want to run dynamically, though, you'll have to make the Torque
libs available on the backend nodes.
On Jan 29, 2009, at 8:32 AM, Kiril Dichev wrote:
> I am trying to run with Open MPI 1.3 on a cluster using PBS Pro:
> pbs_version = PBSPro_18.104.22.168361
> However, after compiling with these options:
> intel10.1-64bit-dynamic-threads CC=/opt/intel/cce/10.1.015/bin/icc
> CXX=/opt/intel/cce/10.1.015/bin/icpc
> CPP="/opt/intel/cce/10.1.015/bin/icc -E"
> FC=/opt/intel/fce/10.1.015/bin/ifort
> F90=/opt/intel/fce/10.1.015/bin/ifort
> F77=/opt/intel/fce/10.1.015/bin/ifort
> --enable-mpi-f90 --with-tm=/usr/pbs/ --enable-mpi-threads=yes --enable-
> I get runtime errors when running on more than one reserved node,
> even with /bin/hostname:
> dynamic-threads/bin/mpirun -np 5 /bin/hostname
> dynamic-threads/bin/mpirun: symbol lookup error: /home_nfs/parma/lib/openmpi/mca_plm_tm.so: undefined symbol: tm_init
> When running on one node only, I don't get this error.
> Now, I see that I only have static PBS libraries, so I tried to compile
> this component statically. I added to the above configure:
> However, nothing changed. The same errors occur.
> But if I compile Open MPI only with static libraries ("--enable-static
> --disable-shared"), the MPI (or non-MPI) programs run OK.
> Can you help me here?
> Dipl.-Inf. Kiril Dichev
> Tel.: +49 711 685 60492
> E-mail: dichev_at_[hidden]
> High Performance Computing Center Stuttgart (HLRS)
> Universität Stuttgart
> 70550 Stuttgart
> users mailing list