Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Intel compiler libraries (was: libnuma issue)
From: Francesco Pietra (chiendarret_at_[hidden])
Date: 2009-04-16 11:29:14


On Thu, Apr 16, 2009 at 3:04 PM, Jeff Squyres <jsquyres_at_[hidden]> wrote:
> I believe that Nysal was referring to
>
>  ./configure CC=icc CXX=icpc F77=ifort FC=ifort LDFLAGS=-static-intel
> --prefix=/usr

I have completely removed openmpi-1.2.3 and reinstalled in /usr/local
from source on a Tyan S2895.

From my .bashrc:

#For intel Fortran and C/C++ compilers

. /opt/intel/fce/10.1.022/bin/ifortvars.sh
. /opt/intel/cce/10.1.022/bin/iccvars.sh

#For openmpi

if [ "$LD_LIBRARY_PATH" ] ; then
   export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
else
   export LD_LIBRARY_PATH="/usr/local/lib"
fi
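
On Debian, the stock ~/.bashrc typically returns early for non-interactive shells, which is exactly the kind of shell Open MPI gets when it launches on a remote node. A minimal sketch (reusing the same paths as above) of how that file would need to be ordered so the Intel environment is always set:

```shell
# Sketch only -- same paths as in this message. These lines must come
# BEFORE any early-return guard such as:  [ -z "$PS1" ] && return
# so that they also run for non-interactive logins (ssh node <command>).
. /opt/intel/fce/10.1.022/bin/ifortvars.sh
. /opt/intel/cce/10.1.022/bin/iccvars.sh

if [ -n "$LD_LIBRARY_PATH" ]; then
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
else
    export LD_LIBRARY_PATH="/usr/local/lib"
fi
```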

===========
francesco_at_tya64:~$ echo $PATH
/opt/intel/cce/10.1.022/bin:/opt/intel/fce/10.1.022/bin:/usr/local/bin/vmd:/usr/local/chimera/bin:/usr/local/bin:/usr/bin:/bin:/usr/games:/home/francesco/hole2/exe:/usr/local/amber9/exe
francesco_at_tya64:~$
============
francesco_at_tya64:~$ echo $LD_LIBRARY_PATH
/opt/intel/mkl/10.1.2.024/lib/em64t:/opt/intel/cce/10.1.022/lib:/opt/intel/fce/10.1.022/lib:/usr/local/lib
francesco_at_tya64:~$
============
francesco_at_tya64:~$ ssh 192.168.1.33 env | sort
HOME=/home/francesco
LANG=en_US.UTF-8
LOGNAME=francesco
MAIL=/var/mail/francesco
PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
PWD=/home/francesco
SHELL=/bin/bash
SHLVL=1
SSH_CLIENT=192.168.1.37 33941 22
SSH_CONNECTION=192.168.1.37 33941 192.168.1.33 22
USER=francesco
_=/usr/bin/env
francesco_at_tya64:~$

where 192.168.1.33 is my remote desktop on the internal network, and I
am launching ssh from the Tyan where openmpi has just been installed
(it also works toward another parallel computer).
==============
francesco_at_tya64:~$ ssh 192.168.1.37 date
Thu Apr 16 17:12:38 CEST 2009
francesco_at_tya64:~$

where 192.168.1.37 is the Tyan computer on which I am working with
openmpi; i.e., the passwordless date command shows that this computer
also knows itself, as is true for all other computers on the internal
network.
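
Note that the remote env listing above contains no LD_LIBRARY_PATH line at all. A quick check of the same point (host address as in this message; whether this is the actual cause here is an assumption):

```shell
# Check what the NON-interactive remote environment provides.
ssh 192.168.1.33 'echo "LD_LIBRARY_PATH=[$LD_LIBRARY_PATH]"'
# Empty brackets would mean the Intel *vars.sh scripts are not sourced
# for non-interactive logins on that node, so libimf.so cannot be
# found there at launch time.
```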
===============

Now with openmpi-1.3.1:

francesco_at_tya64:/usr/local/openmpi-1.3.1$ ./configure
CC=/opt/intel/cce/10.1.022/bin/icc
CXX=/opt/intel/cce/10.1.022/bin/icpc
F77=/opt/intel/fce/10.1.022/bin/ifort
FC=/opt/intel/fce/10.1.022/bin/ifort LDFLAGS=-static-intel
--with-libnuma=/usr --prefix=/usr/local

no warnings

# make all install

no warnings

($ and # mean user and superuser, respectively)
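
One way to check whether -static-intel actually took effect is to inspect the installed binaries with ldd (a sketch; the binary paths assume the --prefix=/usr/local used above):

```shell
# If -static-intel worked, the installed launchers should not list the
# shared Intel runtime at all. Paths assume --prefix=/usr/local.
for bin in /usr/local/bin/orterun /usr/local/bin/orted; do
    echo "== $bin =="
    ldd "$bin" | grep imf || echo "no dynamic libimf dependency"
done
```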

With the connectivity_c test, again the orterun error: libimf.so not found.

Please note that I am not new to openmpi. I have worked for more than
a couple of years without any problem on these same machines with
versions 1.2.3 and 1.2.6. With the latter, when I upgraded from
debian amd64 etch to the new stable amd64 lenny, amber was still
parallelized nicely. Then I changed the raid1 disks to larger ones,
tried to recover the previous installations of the codes, and found
them broken on the new OS installation. Everything non-parallel was
easily fixed, while openmpi-1.3.1 (I upgraded to this version) shows
the issues described above.

As far as I have tested, the OS is in order, and ssh, as shown above,
has no problem.

Given my inexperience as a system administrator, I assume that I am
messing something up; unfortunately, I have been unable to discover
where. An editor is waiting for completion of calculations requested
by a referee, and I am unable to answer.

Thanks a lot for all your efforts to put me on the right road.

francesco

>
> This method makes editing your shell startup files unnecessary for running
> on remote nodes, but you'll still need those files sourced for interactive
> use of the intel compilers and/or for running intel-compiler-generated
> executables locally.
>
> I'm guessing that you're not sourcing the intel .sh files for
> non-interactive logins.  You'll need to check your shell startup files and
> ensure that those sourcing lines are executed when you login to remote nodes
> non-interactively.  E.g.:
>
>  thisnode$ ssh othernode env | sort
>
> shows the relevant stuff in your environment on the other node.  Note that
> this is different than
>
>  thisnode$ ssh othernode
>  othernode$ env | sort
>
>
>
>
> On Apr 16, 2009, at 8:56 AM, Francesco Pietra wrote:
>
>> The suggestion did not work the way I implemented it.
>>
>> ./configure CC=/..cce..icc CXX=/..cce..icpc F77=/..fce..ifort
>> FC=/..fce..ifort --with-libnuma=/usr --prefix=/usr --enable-static
>>
>> ./configure CC=/..cce..icc CXX=/..cce..icpc F77=/..fce..ifort
>> FC=/..fce..ifort --with-libnuma=/usr --prefix=/usr
>> then editing the Makefile by adding "LDFLAGS = -static-intel"
>>
>> ./configure CC=/..cce..icc CXX=/..cce..icpc F77=/..fce..ifort
>> FC=/..fce..ifort --with-libnuma=/usr --prefix=/usr
>> then editing the Makefile by replacing "LDFLAGS" with "LDFLAGS =
>> -static-intel"
>>
>> In all 3 cases, the orterun error: libimf.so not found (the library
>> had been sourced with the Intel *.sh scripts)
>>
>> francesco
>>
>> On Thu, Apr 16, 2009 at 4:43 AM, Nysal Jan <jnysal_at_[hidden]> wrote:
>> > You could try statically linking the Intel-provided libraries. Use
>> > LDFLAGS=-static-intel
>> >
>> > --Nysal
>> >
>> > On Wed, 2009-04-15 at 21:03 +0200, Francesco Pietra wrote:
>> >> On Wed, Apr 15, 2009 at 8:39 PM, Prentice Bisbal <prentice_at_[hidden]>
>> >> wrote:
>> >> > Francesco Pietra wrote:
>> >> >> I used --with-libnuma=/usr since Prentice Bisbal's suggestion and it
>> >> >> worked. Unfortunately, I found no way to fix the failure in finding
>> >> >> libimf.so when compiling openmpi-1.3.1 with intels, as you have seen
>> >> >> in other e-mail from me. And gnu compilers (which work well with
>> >> >> both
>> >> >> openmpi and the slower code of my application) are defeated by the
>> >> >> faster code of my application. With limited hardware resources, I
>> >> >> must
>> >> >> rely on that 40% speeding up.
>> >> >>
>> >> >
>> >> > To fix the libimf.so problem you need to include the path to Intel's
>> >> > libimf.so in your LD_LIBRARY_PATH environment variable. On my system,
>> >> > I
>> >> > installed v11.074 of the Intel compilers in /usr/local/intel, so my
>> >> > libimf.so file is located here:
>> >> >
>> >> > /usr/local/intel/Compiler/11.0/074/lib/intel64/libimf.so
>> >> >
>> >> > So I just add that to my LD_LIBRARY_PATH:
>> >> >
>> >> >
>> >> > LD_LIBRARY_PATH=/usr/local/intel/Compiler/11.0/074/lib/intel64:$LD_LIBRARY_PATH
>> >> > export LD_LIBRARY_PATH
>> >>
>> >> Just a clarification: On my system I use the latest version 10
>> >> intel compilers, 10.1.022, and mkl 10.1.2.024, because it proved
>> >> difficult to make a debian package with version 11. At
>> >>
>> >> echo $LD_LIBRARY_PATH
>> >>
>> >>
>> >> /opt/intel/mkl/10.1.2.024/lib/em64t:/opt/intel/cce/10.1.022/lib:/opt/intel/fce/10.1.022/lib:/usr/local/lib
>> >>
>> >> (that /lib contains libimf.so)
>> >>
>> >> That results from sourcing in my .bashrc:
>> >>
>> >> . /opt/intel/fce/10.1.022/bin/ifortvars.sh
>> >> . /opt/intel/cce/10.1.022/bin/iccvars.sh
>> >>
>> >> Did you suppress that sourcing before exporting LD_LIBRARY_PATH to
>> >> the library at issue? Having turned the problem around so much, it
>> >> is not unlikely that I am confusing myself.
>> >>
>> >> thanks
>> >> francesco
>> >>
>> >>
>> >> >
>> >> > Now I can run whatever programs need libimf.so without any
>> >> > problems. In your case, you'll want to do that before your make
>> >> > command.
>> >> >
>> >> > Here's exactly what I use to compile OpenMPI with the Intel
>> >> > Compilers:
>> >> >
>> >> > export PATH=/usr/local/intel/Compiler/11.0/074/bin/intel64:$PATH
>> >> >
>> >> > export
>> >> >
>> >> > LD_LIBRARY_PATH=/usr/local/intel/Compiler/11.0/074/lib/intel64:$LD_LIBRARY_PATH
>> >> >
>> >> > ../configure CC=icc CXX=icpc F77=ifort FC=ifort
>> >> > --prefix=/usr/local/openmpi-1.2.8/intel-11/x86_64 --disable-ipv6
>> >> > --with-sge --with-openib --enable-static
>> >> >
>> >> > --
>> >> > Prentice
>> >> > _______________________________________________
>> >> > users mailing list
>> >> > users_at_[hidden]
>> >> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>> >> >
>> >
>> >
>>
>
>
> --
> Jeff Squyres
> Cisco Systems
>
>