Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3 (Tim Prince)
From: Ralph Castain (rhc_at_[hidden])
Date: 2011-03-22 09:44:53

On a beowulf cluster? So you are using bproc?

If so, you have to use the OMPI 1.2 series - we discontinued bproc support at the start of 1.3. Bproc will take care of the envars.

If not bproc, then I assume you will use ssh for launching? Usually, the environment is taken care of by setting up your .bashrc (or equiv for your shell) on the remote nodes (which usually have a shared file system so all binaries are available on all nodes).
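Something like the following on each remote node usually suffices — a minimal sketch, with the install prefix taken from your configure line as an example (adjust for your cluster):

```shell
# Remote-node ~/.bashrc fragment: make the Open MPI binaries and
# libraries findable for non-interactive ssh logins, so orted can start.
export PATH=/home/yiguang/dmp-setup/openmpi-1.4.3/bin:$PATH
export LD_LIBRARY_PATH=/home/yiguang/dmp-setup/openmpi-1.4.3/lib:$LD_LIBRARY_PATH

# Alternatively, mpirun's -x option forwards a variable from the
# launching environment to the remote processes, e.g.:
#   mpirun -np 4 -x LD_LIBRARY_PATH ./a.out
```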

On Mar 22, 2011, at 7:00 AM, yanyg_at_[hidden] wrote:

> Thank you very much for the comments and hints. I will try to
> upgrade our Intel compiler collection. As for my second issue:
> with Open MPI, is there any way to propagate environment variables
> of the current process on the master node to the slave nodes,
> so that the orted daemon can run on the slave nodes too?
> Thanks,
> Yiguang
>> On 3/21/2011 5:21 AM, yanyg_at_[hidden] wrote:
>>> I am trying to compile our codes with Open MPI 1.4.3, using Intel
>>> compilers 8.1.
>>> (1) For the Open MPI 1.4.3 installation on our Linux Beowulf cluster, I use:
>>> ./configure --prefix=/home/yiguang/dmp-setup/openmpi-1.4.3 \
>>>     CC=icc CXX=icpc F77=ifort FC=ifort --enable-static \
>>>     LDFLAGS="-i-static -static-libcxa" \
>>>     --with-wrapper-ldflags="-i-static -static-libcxa" \
>>>     2>&1 | tee config.log
>>> and
>>> make all install 2>&1 | tee install.log
>>> The issue is that I am trying to build Open MPI 1.4.3 with the Intel
>>> compiler libraries statically linked into it, so that when we run
>>> mpirun/orterun it does not need to load any Intel libraries
>>> dynamically. But what I get is that mpirun always asks for some Intel
>>> library if I do not put the Intel library path on the library search
>>> path ($LD_LIBRARY_PATH). I checked the Open MPI user archive; it
>>> seems some kind user mentioned using "-i-static" (in my case) or
>>> "-static-intel" in LDFLAGS, which is what I did, but it does not
>>> seem to work, and I found no confirmation in the archive of whether
>>> this works for anyone else. Could anyone help me with this? Thanks!
>> If you must use such an ancient compiler (apparently a 32-bit one),
>> you should read the docs that come with it rather than relying on
>> comments about a more recent version. libsvml isn't linked in
>> automatically by that 32-bit compiler unless you specify an SSE
>> option such as -xW. It's likely that no one has verified Open MPI
>> with a compiler of that vintage. We never used the 32-bit compiler
>> for MPI, and we encountered run-time library bugs in x86_64 ifort
>> that weren't fixed until later versions.
>> --
>> Tim Prince
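On the static-linking question above, one quick way to see whether the Intel runtime is still pulled in dynamically is to inspect the built binary with ldd. A minimal sketch — the default BIN here is only a stand-in so the snippet runs anywhere; point it at the actual installed mpirun (e.g. under the configure prefix /home/yiguang/dmp-setup/openmpi-1.4.3/bin):

```shell
# Check a binary's dynamic dependencies for Intel runtime libraries
# (libsvml, libimf, etc.). BIN defaults to /bin/ls as a placeholder.
BIN=${BIN:-/bin/ls}
if ldd "$BIN" | grep -qiE 'svml|imf|intel'; then
    echo "still linked against Intel shared libraries"
else
    echo "no Intel shared libraries required at run time"
fi
```

If the grep matches nothing, the "-i-static"/"-static-intel" flags took effect and no Intel library path is needed in $LD_LIBRARY_PATH at run time.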