Open MPI User's Mailing List Archives

From: David Gunter (dog_at_[hidden])
Date: 2006-04-10 11:07:25


Here are the results using mpicc:

(ffe-64 153%) mpicc -o send4 send4.c
/usr/bin/ld: skipping incompatible /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libmpi.so when searching for -lmpi
/usr/bin/ld: cannot find -lmpi
collect2: ld returned 1 exit status
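
ld skips the 32-bit libmpi.so here because, without -m32, it is searching for 64-bit libraries. A quick sanity check on the word size a given library was built for is file(1), which should report "ELF 32-bit" for this build:

  file /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libmpi.so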

(ffe-64 154%) mpicc -showme
gcc -I/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/include -I/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/include/openmpi/ompi -pthread -L/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib -lmpi -lorte -lopal -lutil -lnsl -ldl -Wl,--export-dynamic -lutil -lnsl -lm -ldl

Thus it seems the -m32 flag did not make it into mpicc. If I add it
by hand, everything compiles without error:

(ffe-64 155%) gcc -m32 -I/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/include -I/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/include/openmpi/ompi -o send4 send4.c -pthread -L/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib -lmpi -lorte -lopal -lutil -lnsl -ldl -Wl,--export-dynamic -lutil -lnsl -lm -ldl
(ffe-64 166%)
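
Note that the wrappers pass flags they do not recognize straight through to the underlying compiler, so simply adding -m32 to the mpicc command line should work as well (untested here):

  mpicc -m32 -o send4 send4.c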

Next, I do an llogin to get some nodes and try to run:

(flashc 105%) mpiexec -n 4 ./send4
[flashc.lanl.gov:09921] mca: base: component_find: unable to open: /lib/libc.so.6: version `GLIBC_2.3.4' not found (required by /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_paffinity_linux.so) (ignored)
[flashc.lanl.gov:09921] mca: base: component_find: unable to open: libbproc.so.4: cannot open shared object file: No such file or directory (ignored)
[the previous line is repeated four more times]
mpiexec: relocation error: /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_soh_bproc.so: undefined symbol: bproc_nodelist
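
For what it's worth, running ldd on the offending component shows the dependency problem directly; on a node without a 32-bit bproc library it should list libbproc.so.4 as "not found":

  ldd /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_soh_bproc.so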

The problem now looks like the required version of /lib/libc.so.6 is
no longer available. Indeed, it is present on the compile nodes but
cannot be found on the backend nodes - whoops!
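
One way to confirm the mismatch is to ask glibc for its version on both sides: libc.so.6 prints its version banner when executed directly, and (assuming bproc's bpsh is available for running commands on a backend node) the same check can be done remotely:

  /lib/libc.so.6
  bpsh <node> /lib/libc.so.6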

The other problem is that the -m32 flag didn't make it into mpicc for
some reason.
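
If the wrappers are supposed to add -m32 on their own, one option (assuming the --with-wrapper-cflags and --with-wrapper-fflags configure options are supported in this release) would be to bake the flag in when configuring Open MPI, keeping the rest of the original configure arguments:

  ./configure --with-wrapper-cflags=-m32 --with-wrapper-fflags=-m32 ...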

-david

On Apr 10, 2006, at 8:41 AM, Brian Barrett wrote:

> For Linux, this isn't too big of a problem, but you might want to
> take a look at the output of "mpicc -showme" to get an idea of what
> compiler flags / libraries would be added if you used the wrapper
> compilers. I think for Linux the only one that might matter at all
> is -pthread.
>
> But I didn't mention it before because I'm pretty sure that's not the
> issue...
>
> Brian
>
>
> On Apr 10, 2006, at 10:24 AM, David Gunter wrote:
>
>> The problem with doing it that way is that it disallows our in-house
>> code teams from using their compilers of choice. Prior to Open MPI
>> we had been using LA-MPI. LA-MPI has always been compiled in such
>> a way that it didn't matter which other compilers were used to
>> build MPI applications, provided the necessary include and lib files
>> were linked into the compilations.
>>
>> -david
>>
>> On Apr 10, 2006, at 8:00 AM, Ralph Castain wrote:
>>
>>> I'm not an expert on the configure system, but one thing jumps out
>>> at me immediately - you used "gcc" to compile your program. You
>>> really need to use "mpicc" to do so.
>>>
>>> I think that might be the source of your errors.
>>>
>>> Ralph
>>>
>>>
>>> David Gunter wrote:
>>>> After much fiddling around, I managed to create a version of
>>>> Open MPI that would actually build. Unfortunately, I can't run the
>>>> simplest of applications with it. Here's the setup I used:
>>>>
>>>> export CC=gcc
>>>> export CXX=g++
>>>> export FC=gfortran
>>>> export F77=gfortran
>>>> export CFLAGS="-m32"
>>>> export CXXFLAGS="-m32"
>>>> export FFLAGS="-m32"
>>>> export FCFLAGS="-m32"
>>>> export LDFLAGS="-L/usr/lib"
>>>>
>>>> ./configure --prefix=/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b \
>>>>     --build=i686-pc-linux-gnu --with-bproc --with-gm \
>>>>     --enable-io-romio --with-romio \
>>>>     --with-io-romio-flags='--build=i686-pc-linux-gnu'
>>>>
>>>> Configure completes, as does 'make' and then 'make install'. Next
>>>> I tried to compile a simple MPI_Send test application, which
>>>> fails to run:
>>>>
>>>> (flashc 104%) gcc -m32 -I/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/include -o send4 send4.c -L/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib -lmpi
>>>> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libopal.so.0: warning: epoll_wait is not implemented and will always fail
>>>> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libopal.so.0: warning: epoll_ctl is not implemented and will always fail
>>>> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libopal.so.0: warning: epoll_create is not implemented and will always fail
>>>>
>>>> (flashc 105%) which mpiexec
>>>> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/bin/mpiexec
>>>>
>>>> (flashc 106%) mpiexec -n 4 ./send4
>>>> [flashc.lanl.gov:32373] mca: base: component_find: unable to open: /lib/libc.so.6: version `GLIBC_2.3.4' not found (required by /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_paffinity_linux.so) (ignored)
>>>> [flashc.lanl.gov:32373] mca: base: component_find: unable to open: libbproc.so.4: cannot open shared object file: No such file or directory (ignored)
>>>> [the previous line is repeated four more times]
>>>> mpiexec: relocation error: /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_soh_bproc.so: undefined symbol: bproc_nodelist
>>>>
>>>> I'm still open to suggestions.
>>>>
>>>> -david
>>>>
>>>> On Apr 10, 2006, at 7:11 AM, David R. (Chip) Kent IV wrote:
>>>>> When running the tests, is the LD_LIBRARY_PATH getting set to
>>>>> lib64 instead of lib, or something like that?
>>>>>
>>>>> Chip
>>>>>
>>>>> On Sat, Apr 08, 2006 at 02:45:01AM -0600, David Gunter wrote:
>>>>>> I am trying to build a 32-bit compatible Open MPI for our 64-bit
>>>>>> Bproc Opteron systems. I saw the thread from last August-
>>>>>> September 2005 regarding this but didn't see where it ever
>>>>>> succeeded or if any of the problems had been fixed. Most
>>>>>> importantly, romio is required to work as well. Is this possible,
>>>>>> and how is it done?
>>>>>>
>>>>>> Here's what I have tried so far:
>>>>>>
>>>>>> setenv CFLAGS -m32
>>>>>> setenv CXXFLAGS -m32
>>>>>> setenv FFLAGS -m32
>>>>>> setenv F90FLAGS -m32
>>>>>>
>>>>>> I have used the '--build=i686-pc-linux-gnu' option to the
>>>>>> configure setup, as well as
>>>>>> --with-io-romio-flags="--build=i686-pc-linux-gnu".
>>>>>>
>>>>>> Configure halts with errors when trying to run the Fortran 77
>>>>>> tests. If I remove those env settings and just use the --build
>>>>>> option, configure will proceed to the end but the make will
>>>>>> eventually halt with errors due to a mix of lib64 libs being
>>>>>> accessed at some point.
>>>>>>
>>>>>> Any ideas?
>>>>>>
>>>>>> -david
>>>>>>
>>>>>> --
>>>>>> David Gunter
>>>>>> CCN-8: HPC Environments: Parallel Tools Team
>>>>>> Los Alamos National Laboratory
>>>>> --
>>>>> -----------------------------------------------------
>>>>> David R. "Chip" Kent IV
>>>>> Parallel Tools Team
>>>>> High Performance Computing Environments Group (CCN-8)
>>>>> Los Alamos National Laboratory
>>>>> (505) 665-5021    drkent_at_[hidden]
>>>>> -----------------------------------------------------
>>>>> This message is "Technical data or Software Publicly Available"
>>>>> or "Correspondence".