
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] OpenMPI 1.6.4 and Intel Composer_xe_2013.4.183: problem with remote runs, orted: error while loading shared libraries: libimf.so
From: Stefano Zaghi (stefano.zaghi_at_[hidden])
Date: 2013-06-21 14:05:37


Hi Gus,
thank you for your reply.

I chose that strange path only as a test. However, my home dir is shared
on all nodes, and the lib dir is not a simple symlink. I think Thomas is
right: I have to remove intel64 from the Intel lib path.
I will try on Monday.

Thank you again.
On 21 Jun 2013 at 17:55, "Gus Correa" <gus_at_[hidden]> wrote:

> Hi Stefano
>
> Make sure your Intel compiler's shared libraries
> are accessible on all nodes.
>
> Is your /home directory shared across all nodes?
> How about /opt (if Intel is installed there)?
>
> By default Intel installs the compilers on /opt, which in typical
> clusters (and Linux distributions) is a local directory (to each node),
> not shared via NFS.
> Although you seem to have installed it somewhere else,
> /home/stefano/opt maybe, if /home/stefano/opt
> is just a soft link to /opt, not a real directory,
> that may not do the trick across the cluster network.
>
> This error:
>
> >> /home/stefano/opt/mpi/openmpi/1.6.4/intel/bin/orted: error
> >> while loading shared libraries: libimf.so: cannot open shared
> >> object file: No such file or directory
>
> suggests something like that is going on (libimf.so is an
> *Intel shared library*, it is *not an Open MPI library*).
>
>
> To have all needed tools (OpenMPI and Intel)
> available on all nodes, there are two typical solutions
> (by the way, see this FAQ:
> http://www.open-mpi.org/faq/?category=building#where-to-install ):
>
> 1) Install them on all nodes, via RPM, or configure/make/install, or other
> mechanism.
> This is time consuming and costly to maintain, but scales well
> in big or small clusters.
>
> 2) Install them on your master/head/administration/storage node,
> and share them via the network (typically via NFS export/mount).
> This is easy to maintain, and scales well in small/medium clusters,
> but not so much on big ones.
>
> Make sure the Intel and MPI directories are either shared by
> or present/installed on all nodes.
>
> I also wonder if you really need these many environment variables:
>
> >> LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
> >> export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH
>
> or if that may be actually replaced by the simpler form:
>
> >> LD_LIBRARY_PATH=${MPI}/lib:$LD_LIBRARY_PATH
>
> I hope it helps,
> Gus Correa
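[As a side note on the exports discussed above, the resulting search order
can be inspected with a minimal sketch like the one below. The paths are
the ones reported in the thread; the trailing ":$LD_LIBRARY_PATH" is
dropped here so the output is deterministic.]

```shell
# The exports from the thread; the dynamic loader searches these
# directories left to right, so the Open MPI dirs come first.
MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
export LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib
# Print one search directory per line:
echo "$LD_LIBRARY_PATH" | tr ':' '\n'
# Note: compilervars.sh must ALSO have added the Intel lib dir
# (e.g. .../2013.4.183/lib/intel64) on every node, or orted will
# not find libimf.so.
```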
>
>
>
> On 06/21/2013 04:35 AM, Stefano Zaghi wrote:
>
>> Wow... I think you are right... I will check after the job I have
>> just started finishes.
>>
>> Thank you again.
>>
>> See you soon
>>
>> Stefano Zaghi
>> Ph.D. Aerospace Engineer,
>> Research Scientist, Dept. of Computational Hydrodynamics at *CNR-INSEAN*
>> <http://www.insean.cnr.it/en/content/cnr-insean>
>> The Italian Ship Model Basin
>> (+39) 06.50299297 (Office)
>> My codes:
>> *OFF* <https://github.com/szaghi/OFF>, Open source Finite volumes Fluid
>> dynamics code
>> *Lib_VTK_IO* <https://github.com/szaghi/Lib_VTK_IO>, a Fortran library
>> to write and read data conforming to the VTK standard
>> *IR_Precision* <https://github.com/szaghi/IR_Precision>, a Fortran
>> (standard 2003) module to develop portable codes
>>
>>
>> 2013/6/21 <thomas.forde_at_[hidden]>
>>
>> hi Stefano
>>
>> /home/stefano/opt/intel/2013.4.183/lib/intel64/ is also the wrong
>> path, as the file is in ..183/lib/ and not ...183/lib/intel64/
>>
>> is that why?
>> ./Thomas
>>
>>
>> On 21 June 2013 at 10:26, "Stefano Zaghi"
>> <stefano.zaghi_at_[hidden]> wrote:
>>
>>> Dear Thomas,
>>> thank you again.
>>>
>>> Symlinking in /usr/lib64 is not enough; I have symlinked also
>>> in /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib and, as expected,
>>> not only libimf.so but also libirng.so and libintlc.so.5 are
>>> necessary.
>>>
>>> Now remote runs work too, but this is only a workaround; I still
>>> do not understand why mpirun does not find the Intel libraries
>>> even though LD_LIBRARY_PATH also contains
>>> /home/stefano/opt/intel/2013.4.183/lib/intel64. Can you try to
>>> explain again?
>>>
>>> Thank you very much.
>>>
>>> Stefano Zaghi
>>>
>>>
>>> 2013/6/21 <thomas.forde_at_[hidden]>
>>>
>>> Your settings are as follows:
>>> export MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
>>> export PATH=${MPI}/bin:$PATH
>>> export
>>> LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
>>> export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH
>>>
>>> and your path to the libimf.so file is
>>> /home/stefano/opt/intel/2013.4.183/lib/libimf.so
>>>
>>> Your exported LD_LIBRARY_PATH, if I deduce it right, would be the
>>> following, because you use $MPI first:
>>>
>>> /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib/openmpi and
>>> /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib
>>>
>>> As you can see, it doesn't look for the files in the right place.
>>>
>>> The simplest thing I would try is to symlink the libimf.so
>>> file into /usr/lib64; that should give you a workaround.
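[The workaround Stefano later confirms (symlinking libimf.so, libirng.so
and libintlc.so.5 into the Open MPI lib dir) can be sketched as follows.
The demo below uses temporary directories with empty stand-in files so it
runs anywhere; on the real cluster the source would be the Intel lib dir
and the target the Open MPI lib dir quoted in the thread.]

```shell
# Stand-ins for /home/stefano/opt/intel/2013.4.183/lib/intel64 and
# /home/stefano/opt/mpi/openmpi/1.6.4/intel/lib from the thread:
INTEL_LIB=$(mktemp -d)
MPI_LIB=$(mktemp -d)
# Link each Intel runtime library orted could not find into the
# Open MPI lib dir, which is already on the remote search path:
for lib in libimf.so libirng.so libintlc.so.5; do
    touch "${INTEL_LIB}/${lib}"            # stand-in for the real library
    ln -s "${INTEL_LIB}/${lib}" "${MPI_LIB}/${lib}"
done
ls "${MPI_LIB}"
```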
>>>
>>>
>>>
>>>
>>>
>>>
>>> From: Stefano Zaghi <stefano.zaghi_at_[hidden]>
>>> To: Open MPI Users <users_at_[hidden]>
>>> Date: 21.06.2013 09:45
>>> Subject: Re: [OMPI users] OpenMPI 1.6.4 and Intel
>>> Composer_xe_2013.4.183: problem with remote runs, orted: error
>>> while loading shared libraries: libimf.so
>>> Sent by: users-bounces_at_[hidden]
>>> ------------------------------------------------------------------------
>>>
>>>
>>>
>>> Dear Thomas,
>>>
>>> thank you very much for your very fast reply.
>>>
>>> Yes I have that library in the correct place:
>>>
>>> -rwxr-xr-x 1 stefano users 3.0M May 20 14:22
>>> opt/intel/2013.4.183/lib/intel64/libimf.so
>>>
>>> Stefano Zaghi
>>>
>>>
>>> 2013/6/21 <thomas.forde_at_[hidden]>
>>> hi Stefano
>>>
>>> Your error message shows that you are missing a shared library,
>>> not necessarily that the library path is wrong.
>>>
>>> Do you actually have libimf.so? Can you find the file on your
>>> system?
>>>
>>> ./Thomas
>>>
>>>
>>>
>>>
>>> From: Stefano Zaghi <stefano.zaghi_at_[hidden]>
>>> To: users_at_[hidden]
>>> Date: 21.06.2013 09:27
>>> Subject: [OMPI users] OpenMPI 1.6.4 and Intel
>>> Composer_xe_2013.4.183: problem with remote runs, orted: error
>>> while loading shared libraries: libimf.so
>>> Sent by: users-bounces_at_[hidden]
>>> ------------------------------------------------------------------------
>>>
>>>
>>>
>>>
>>> Dear All,
>>> I have compiled OpenMPI 1.6.4 with Intel Composer_xe_2013.4.183.
>>>
>>> My configure is:
>>>
>>> ./configure --prefix=/home/stefano/opt/mpi/openmpi/1.6.4/intel
>>> CC=icc CXX=icpc F77=ifort FC=ifort
>>>
>>> Intel Composer has been installed in:
>>>
>>> /home/stefano/opt/intel/2013.4.183/composer_xe_2013.4.183
>>>
>>> In the .bashrc and .profile on all nodes there is:
>>>
>>> source /home/stefano/opt/intel/2013.4.183/bin/compilervars.sh
>>> intel64
>>> export MPI=/home/stefano/opt/mpi/openmpi/1.6.4/intel
>>> export PATH=${MPI}/bin:$PATH
>>> export
>>> LD_LIBRARY_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_LIBRARY_PATH
>>> export LD_RUN_PATH=${MPI}/lib/openmpi:${MPI}/lib:$LD_RUN_PATH
>>>
>>> If I run a parallel job within each single node (e.g. mpirun -np 8
>>> myprog) all works well. However, when I try to run a parallel
>>> job across several nodes of the cluster (remote runs), like the
>>> following:
>>>
>>> mpirun -np 16 --bynode --machinefile nodi.txt -x
>>> LD_LIBRARY_PATH -x LD_RUN_PATH myprog
>>>
>>> I got the following error:
>>>
>>> /home/stefano/opt/mpi/openmpi/1.6.4/intel/bin/orted: error
>>> while loading shared libraries: libimf.so: cannot open shared
>>> object file: No such file or directory
>>>
>>> I have read many FAQs and online resources, all indicating
>>> LD_LIBRARY_PATH as the possible problem (wrong setting).
>>> However I am not able to figure out what is going wrong;
>>> LD_LIBRARY_PATH seems to be set correctly on all nodes.
>>>
>>> It is worth noting that on the same cluster I have successfully
>>> installed OpenMPI 1.4.3 with Intel Composer_xe_2011_sp1.6.233
>>> following exactly the same procedure.
>>>
>>> Thank you in advance for any suggestions,
>>> sincerely
>>>
>>> Stefano Zaghi
>>> _______________________________________________
>>> users mailing list
>>> users_at_[hidden]
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> This e-mail may contain confidential information, or otherwise
>>> be protected against unauthorised use. Any disclosure,
>>> distribution or other use of the information by anyone but the
>>> intended recipient is strictly prohibited.
>>> If you have received this e-mail in error, please advise the
>>> sender by immediate reply and destroy the received documents
>>> and any copies hereof.
>>>
>>>
>>> Before printing, think about the environment
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>