Hmmm... well, the fact is that Open MPI may not support LSF in the sense you
are asking about. As you have noticed, we don't actually read the LSF
environment variables to get the allocated node info, nor do we interface
with LSF for launch.
Obviously, as you have done, there are ways to work around that limitation.
We don't pass the full environment on to the nodes as (in most cases) that
can cause problems (e.g., with the DISPLAY value).
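For what it's worth, mpirun does let you forward selected variables explicitly with its -x option, which is usually safer than exporting the whole environment. A sketch (the variable name and application name here are made up, not from this thread):

```shell
# Forward only the variables the remote processes actually need;
# MY_APP_CONFIG and ./my_app are illustrative names.
mpirun -np 4 -x MY_APP_CONFIG -x LD_LIBRARY_PATH ./my_app
```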
I have been asked about providing native LSF support and hope to get to that
in the not-too-distant future, but have no access to an LSF machine to
verify operation (I may have a cooperative user, though, who will test for
me - I would welcome another!).
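In the meantime, one workaround people use is to translate the node list that LSF puts in the environment into a hostfile mpirun can consume. A sketch, assuming LSF's LSB_HOSTS variable (a space-separated list with one hostname per allocated slot); the helper name, hostfile name, and application name are made up:

```shell
# Hypothetical helper (not from the original thread): turn a space-separated
# slot list such as LSF's LSB_HOSTS ("hostA hostA hostB") into hostfile
# lines of the form "host slots=N" that Open MPI's mpirun understands.
lsf_hostfile() {
    echo "$1" | tr ' ' '\n' | sort | uniq -c \
        | awk '{ printf "%s slots=%s\n", $2, $1 }'
}

# Under LSF, something along these lines could then launch the job
# (./my_mpi_app is a made-up application name):
#   lsf_hostfile "$LSB_HOSTS" > job.hosts
#   mpirun --hostfile job.hosts -np $(echo "$LSB_HOSTS" | wc -w) ./my_mpi_app
```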
On 3/12/07 1:52 PM, "Michael" <mklus_at_[hidden]> wrote:
> What is the status of LSF and OpenMPI?
> I'm running on a major HPC system using GM & LSF, and we have to use a
> number of workarounds so that we can use Open MPI. Specifically,
> using the scripts on this system, we have to have our csh file source
> a file to set up the environment on the nodes. Using Open MPI's
> mpirun directly does not work because, at the very minimum, the hosts
> to run on are not available to it. I had a workaround, but there it
> seems that the environment is not passed to the nodes.
> The notes from the support people indicate that the problem is that
> Open MPI's mpirun command doesn't recognize the "-gm-copy-env"
> option. Does this mean anything to anyone?
> Open MPI: 1.1.2
> Open MPI SVN revision: r12073
> MCA btl: self (MCA v1.0, API v1.0, Component v1.1.2)
> MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.2)
> MCA btl: gm (MCA v1.0, API v1.0, Component v1.1.2)
> MCA btl: mvapi (MCA v1.0, API v1.0, Component v1.1.2)
> MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
> Have there been any improvements in the compatibility of Open MPI with
> LSF since version 1.1.2?
> Does anyone on the Open MPI team have access to a system using the LSF
> batch queueing system? Is a machine with GM and LSF different yet?