Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Cannot build OpenMPI 1.3 with PGI pgf90 and Gnu gcc/g++.
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-03-30 22:35:26


I can replicate your error; this looks like a Libtool bug.

Open MPI does specifically test each of the C, C++, F77, and F90
compilers for the -pthread flag (and others). When mixing gcc/g++ and
pgf77/pgf90, OMPI's configure script correctly determines that gcc/g++
support -pthread, but pgf77/pgf90 do not.
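The probe amounts to compiling a trivial program with -pthread and checking whether the compiler accepts the flag. A minimal sketch of the idea, assuming a POSIX shell and a C compiler in $CC or cc (this is not OMPI's actual configure test, just an illustration):

```shell
# Minimal configure-style probe: does this compiler accept -pthread?
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF
result=rejected
if ${CC:-cc} -pthread conftest.c -o conftest 2>/dev/null; then
  result=accepted
fi
echo "-pthread: $result"
rm -f conftest conftest.c
```

Run the analogous probe with pgf90 on a trivial Fortran source and it reports "rejected", which is what configure correctly records for the PGI compilers.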

When building the F90 library, OMPI issues the following command:

/bin/sh ../../../libtool --mode=link pgf90 -I../../../ompi/include \
  -I../../../ompi/include -I. -I. -I../../../ompi/mpi/f90 \
  -export-dynamic -o libmpi_f90.la -rpath /home/jsquyres/bogus/lib \
  mpi.lo mpi_sizeof.lo mpi_comm_spawn_multiple_f90.lo mpi_testall_f90.lo \
  mpi_testsome_f90.lo mpi_waitall_f90.lo mpi_waitsome_f90.lo \
  mpi_wtick_f90.lo mpi_wtime_f90.lo ../../../ompi/libmpi.la -lnsl \
  -lutil -lm

Notice the lack of -pthread in there. Libtool translates this into:

pgf90 -shared -fpic -Mnomain .libs/mpi.o .libs/mpi_sizeof.o \
  .libs/mpi_comm_spawn_multiple_f90.o .libs/mpi_testall_f90.o \
  .libs/mpi_testsome_f90.o .libs/mpi_waitall_f90.o \
  .libs/mpi_waitsome_f90.o .libs/mpi_wtick_f90.o .libs/mpi_wtime_f90.o \
  -Wl,-rpath -Wl,/users/jsquyres/svn/ompi/ompi/.libs \
  -Wl,-rpath -Wl,/users/jsquyres/svn/ompi/orte/.libs \
  -Wl,-rpath -Wl,/users/jsquyres/svn/ompi/opal/.libs \
  -Wl,-rpath -Wl,/home/jsquyres/bogus/lib \
  -L/users/jsquyres/svn/ompi/orte/.libs \
  -L/users/jsquyres/svn/ompi/opal/.libs \
  ../../../ompi/.libs/libmpi.so \
  /users/jsquyres/svn/ompi/orte/.libs/libopen-rte.so \
  /users/jsquyres/svn/ompi/opal/.libs/libopen-pal.so \
  -ldl -lnsl -lutil -lm -pthread -Wl,-soname -Wl,libmpi_f90.so.0 \
  -o .libs/libmpi_f90.so.0.0.0

Note the addition of -pthread, which then causes the problem. I
*suspect* that this is because the f90 library is linking against
libmpi, libopen-rte, and libopen-pal (OMPI internal libraries) that
were built with -pthread (i.e., Libtool picks up these flags
automatically). This should probably be reported to the Libtool
developers, but I'm not 100% sure they can fix it -- I believe that
they assume that the linker flags used for one language can be used in
any language (compiler/linker).
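One way to see the suspected mechanism: libtool 2.x records flags exported by a library's dependencies in the "inherited_linker_flags" field of that library's .la file, and a later link against the .la picks them up. A small self-contained illustration (the file content below is a hypothetical excerpt, not OMPI's actual libmpi.la):

```shell
# Create a hypothetical excerpt of a libtool archive file and show the
# field where dependency flags such as -pthread get recorded.
cat > libmpi.la.example <<'EOF'
# libmpi.la - a libtool library file (hypothetical excerpt)
inherited_linker_flags=' -pthread'
dependency_libs=' -lnsl -lutil -lm'
EOF
grep inherited_linker_flags libmpi.la.example
```

In a real build tree you would grep the .la files under ompi/, orte/, and opal/ instead.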

I can think of two workarounds for you (one significantly less icky
than the other):

1. Use pgcc, pgCC, pgf77, and pgf90 to build Open MPI. If you have no
C++ MPI code, the resulting Open MPI build *should* be compatible with
your C + Fortran code.
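For workaround #1, the configure invocation would mirror the build script quoted later in this thread, but with the PGI C/C++ compilers as well. A hypothetical sketch (the install prefix is a placeholder and the --with-* values are copied from the user's script, so adjust both for your system):

```shell
#! /bin/sh
# Hypothetical all-PGI build script; MYINSTALLDIR is a placeholder.
export MYINSTALLDIR=/path/to/install
export CC=pgcc
export CXX=pgCC
export F77=pgf77
export FC=pgf90
../configure \
  --prefix=${MYINSTALLDIR} \
  --with-libnuma=/usr/lib64 \
  --with-tm=/usr/lib64 \
  --with-openib=/usr/lib64 \
  --enable-static \
  2>&1 | tee configure.log
```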

2. Instead of using the "real" pgf77/pgf90, put pgf77/pgf90 scripts
early in your PATH that simply strip out -pthread from the argv and
then invoke the real/underlying pgf77/pgf90 compilers. This is pretty
icky, but it should work...
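Workaround #2 could be sketched as below: a hypothetical "pgf90" wrapper script placed early in $PATH, assuming a POSIX /bin/sh. The real-compiler path is an assumed placeholder, and the filtering lives in a helper function so the idea is easy to see:

```shell
#!/bin/sh
# Hypothetical "pgf90" wrapper script to place early in $PATH.
# REAL_PGF90 is an assumed location -- point it at your actual compiler.
REAL_PGF90="${REAL_PGF90:-/opt/pgi/linux86-64/8.0-4/bin/pgf90}"

# Echo back the argument list with every "-pthread" removed.
strip_pthread() {
  out=
  for a in "$@"; do
    [ "$a" = "-pthread" ] || out="$out $a"
  done
  printf '%s\n' "$out"
}

# A real wrapper would end with something like:
#   exec "$REAL_PGF90" $(strip_pthread "$@")
# (this simple version does not preserve arguments containing spaces,
# which is normally fine for compiler flag lists)
```

The same trick works for a pgf77 wrapper.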

On Mar 30, 2009, at 11:21 AM, Gus Correa wrote:

> Hi Jeff, list
>
> Jeff: Thank you for getting back to me.
>
>
> 1) MPI-F90 features
>
> I most likely need the F90 bindings.
>
> The majority of the climate/ocean/atmosphere programs
> are written in F90.
> I've been using mpif90 to build these codes for a while
> (with OpenMPI and MPICH2).
> These are mostly "community" codes from NCAR, national labs, etc,
> and some code written in house.
> I would have to check these (big) codes in detail to see
> if any MPI-F90 features
> (e.g. mpi type checking) are really used,
> or if they are just doing MPI-F77 old-style calls
> (in which case I might be able
> to get away with the mpif77 wrapper built on top of pgf90, right?).
>
> However, if not yet used, sooner or later somebody will
> write a module relying on MPI-F90 features,
> hence, it would be better to build mpif90.
>
> **
>
> 2) Configure scripts
>
> I tried to build in two ways:
> with machine-specific optimization flags, as in the script below,
> and without any optimization (other than what OpenMPI sets
> internally).
> Both builds fail the same way, at the same point,
> as I described before.
>
> Here is the "optimized" script:
>
> (The "non-optimized" script just comments out
> the lines that export CFLAGS, CXXFLAGS, FFLAGS, and FCFLAGS.)
>
> #! /bin/sh
> export MYINSTALLDIR=/somewhere/in/the/file/system
> ####################################################
> export CC=gcc
> export CXX=g++
> export F77=pgf90
> export FC=${F77}
> # Note: Optimization flags for AMD Opteron "Shanghai"
> export CFLAGS='-march=amdfam10 -O3 -finline-functions -funroll-loops
> -mfpmath=sse'
> export CXXFLAGS=${CFLAGS}
> export FFLAGS='-tp shanghai-64 -fast -Mfprelaxed'
> export FCFLAGS=${FFLAGS}
> ####################################################
> ../configure \
> --prefix=${MYINSTALLDIR} \
> --with-libnuma=/usr/lib64 \
> --with-tm=/usr/lib64 \
> --with-openib=/usr/lib64 \
> --enable-static \
> 2>&1 | tee configure.log
>
> **
>
> 3) "-pthread" flag when building libmpi_f90.so.0.0.0
>
> I am confused about why the "-pthread" flag, which was absent
> when building libmpi_f90.so.0.0.0 on 1.2.8 (successful build),
> appears in the same spot on 1.3, causing the build to fail.
> The build scripts are the same, the computer is the same,
> etc, only the OpenMPI release varies.
>
> Is there a way around this?
> E.g., not using pthreads there if it is not essential,
> or perhaps helping PGI find the library and link to it,
> if it is indeed required there.
>
> **
>
> Thank you again,
> Gus Correa
> ---------------------------------------------------------------------
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> ---------------------------------------------------------------------
>
> Jeff Squyres wrote:
> > Sorry for the delay in replying.
> >
> > Can you send your exact configure command line?
> >
> > Also, do you need the F90 MPI bindings? If not, you can disable them
> > with the following:
> >
> > --disable-mpi-f90
> >
> >
> > On Mar 27, 2009, at 9:50 AM, Gus Correa wrote:
> >
> >> Dear OpenMPI pros.
> >>
> >> I've got no answer, so let me try again.
> >>
> >> I can't build OpenMPI 1.3 with a hybrid pgf90+gcc/g++ compiler set.
> >> However, OpenMPI 1.2.8 builds correctly with the same compilers,
> >> on the same computer (Linux x86_64 cluster), and same environment.
> >> See details in my original message below.
> >>
> >> The OpenMPI 1.3 build fails due to the (gcc) "-pthread" flag being
> >> rejected by pgf90 during the libtool
> >> link phase of libmpi_f90.so.0.0.0.
> >> Since this flag was not present in the same spot in OMPI 1.2.8,
> >> I wonder if the "-pthread" flag is really needed at that point,
> >> or if it inadvertently sneaked into the OMPI 1.3 Makefiles and
> >> configure script.
> >>
> >> These hybrid compiler builds of MPI often mean the difference
> >> between being able to compile and run the very large
> >> climate/ocean/atmosphere codes
> >> which are at the core of our research mission here.
> >> To my knowledge, this is not a unique situation,
> >> and other people in our research field also need and use these
> >> libraries built on "Gnu+commercial Fortran" compilers.
> >> For this reason I keep a variety of OpenMPI, MPICH2, MVAPICH2
> >> builds, and I try to stay current with the newest releases.
> >>
> >> Any help is much appreciated.
> >>
> >> Thank you,
> >> Gus Correa
> >>
> >> ---------------------------------------------------------------------
> >> Gustavo Correa
> >> Lamont-Doherty Earth Observatory - Columbia University
> >> Palisades, NY, 10964-8000 - USA
> >>
> >> ---------------------------------------------------------------------
> >>
> >>
> >> Gus Correa wrote:
> >> > Dear OpenMPI experts
> >> >
> >> > Against all odds and the OpenMPI developers' and FAQ
> >> > recommendations, I've been building hybrid OpenMPI libraries
> >> > using Gnu gcc/g++ and Fortran compilers from PGI and from Intel.
> >> > One reason for this is that some climate/oceans/atmosphere
> >> > code we use compiles and runs with less hassle this way.
> >> >
> >> > (I also build "thoroughbred" Gnu/gfortran, PGI, and
> >> > Intel libraries.)
> >> >
> >> > Anyway, all was fine up to OpenMPI 1.2.8, of which I have
> >> > functional Gnu(C/C++)+PGI(F77/F90) and Gnu(C/C++)+Intel(F77/F90)
> >> > libraries.
> >> >
> >> > However, when I tried to compile the Gnu(C/C++)+PGI(F77/F90)
> >> > version of OpenMPI 1.3 (I haven't got to 1.3.1 yet),
> >> > I've got an error during the make phase (see snippet below).
> >> > The error seems to be caused by the insertion of the "-pthread"
> >> > compiler flag on the build of libmpi_f90.so.0.0.0.
> >> >
> >> > Some change in the configure script may perhaps
> >> > have allowed this extra flag to sneak in?
> >> > The flag was not present in the same spot in the OpenMPI 1.2.8
> >> > build, as I checked in the make log of 1.2.8.
> >> > It is a Gnu/gcc flag, not recognized by PGI/pgf90.
> >> >
> >> > For now I can live with 1.2.8, but I wonder if this problem can
> >> > be fixed somehow, so that I can stay up to date with the
> >> > OpenMPI releases.
> >> >
> >> > More info:
> >> > (The same configuration was used for both OpenMPI 1.3 and 1.2.8.)
> >> >
> >> > 1. AMD Opteron Shanghai (dual socket, quad core)
> >> > 2. Linux kernel 2.6.18-92.1.22.el5 #1 SMP (CentOS 5.2)
> >> > 3. PGI 8.0.4
> >> > 4. Gnu/GCC 4.1.2
> >> >
> >> > Error message from "make":
> >> >
> >> > libtool: link: pgf90 -shared -fpic -Mnomain .libs/mpi.o
> >> > .libs/mpi_sizeof.o .libs/mpi_comm_spawn_multiple_f90.o
> >> > .libs/mpi_testall_f90.o .libs/mpi_testsome_f90.o
> >> > .libs/mpi_waitall_f90.o .libs/mpi_waitsome_f90.o
> >> > .libs/mpi_wtick_f90.o .libs/mpi_wtime_f90.o
> >> > -Wl,-rpath -Wl,/home/swinst/openmpi/1.3/openmpi-1.3/build_gnu-4.1.2_pgi-8.0-4/ompi/.libs
> >> > -Wl,-rpath -Wl,/home/swinst/openmpi/1.3/openmpi-1.3/build_gnu-4.1.2_pgi-8.0-4/orte/.libs
> >> > -Wl,-rpath -Wl,/usr/lib64
> >> > -Wl,-rpath -Wl,/home/swinst/openmpi/1.3/openmpi-1.3/build_gnu-4.1.2_pgi-8.0-4/opal/.libs
> >> > -Wl,-rpath -Wl,/home/sw/openmpi/openmpi-1.3-gnu-4.1.2-pgi-8.0-4/lib
> >> > -Wl,-rpath -Wl,/usr/lib64
> >> > -L/home/swinst/openmpi/1.3/openmpi-1.3/build_gnu-4.1.2_pgi-8.0-4/orte/.libs
> >> > -L/home/swinst/openmpi/1.3/openmpi-1.3/build_gnu-4.1.2_pgi-8.0-4/opal/.libs
> >> > ../../../ompi/.libs/libmpi.so -L/usr/lib64/lib -L/usr/lib64
> >> > -lrdmacm -libverbs
> >> > /home/swinst/openmpi/1.3/openmpi-1.3/build_gnu-4.1.2_pgi-8.0-4/orte/.libs/libopen-rte.so
> >> > /usr/lib64/libtorque.so
> >> > /home/swinst/openmpi/1.3/openmpi-1.3/build_gnu-4.1.2_pgi-8.0-4/opal/.libs/libopen-pal.so
> >> > -lnuma -ldl -lnsl -lutil -lm -pthread -Wl,-soname -Wl,libmpi_f90.so.0
> >> > -o .libs/libmpi_f90.so.0.0.0
> >> > pgf90-Error-Unknown switch: -pthread
> >> > make[4]: *** [libmpi_f90.la] Error 1
> >> >
> >> >
> >> > Thank you,
> >> > Gus Correa
> >> >
> >> > ---------------------------------------------------------------------
> >> > Gustavo Correa
> >> > Lamont-Doherty Earth Observatory - Columbia University
> >> > Palisades, NY, 10964-8000 - USA
> >> >
> >> > ---------------------------------------------------------------------
> >> >
> >> >
> >> > _______________________________________________
> >> > users mailing list
> >> > users_at_[hidden]
> >> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >>
> >
> >
>

-- 
Jeff Squyres
Cisco Systems