On Dec 20, 2006, at 1:54 PM, Andrus, Mr. Brian ((Contractor)) wrote:
> Thanks for the info.
> I downloaded the newer stable (1.1.2-1) and have tried it with the
> Since I am trying to use the rpm source, everything comes out in one
> output file.
> I have compressed and attached it.
The problem appears to be that configure is finding g77 for your
Fortran compiler and pgf90 for your F90 compiler. Since they're not
link compatible, it takes the most conservative approach and goes
ahead and compiles f77 support but disables f90 support. Hence, the
messages you see from mpif90.
I think the problem is how you're invoking rpmbuild, specifically the
--define parameter. Try the following (note the spacing and quoting):
shell$ rpmbuild --rebuild --define "configure_options CC=pgcc \
CXX=pgCC F77=pgf77 FC=pgf90 FFLAGS=-fastsse FCFLAGS=-fastsse" \
(I artificially wrapped with \ characters so that the mail wouldn't
wrap the line.)
Specifically, the token "configure_options" needs to appear in the
same command line argument as its value (it's a weird RPM-ism). So
the whole string needs to be quoted together, with a single space
between the token "configure_options" and the value ("CC=pgcc ...").
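To see how the shell groups those arguments, here is a small illustrative sketch (the `printargs` helper is hypothetical, just a stand-in for rpmbuild that echoes each argument it receives on its own line):

```shell
# Print each argument on its own line, wrapped in <> so grouping is visible.
printargs() { for a in "$@"; do printf '<%s>\n' "$a"; done; }

# Correct: macro name and value are one quoted argument, separated by a
# single space -- rpmbuild receives one --define payload.
printargs --define "configure_options CC=pgcc FC=pgf90"
# <--define>
# <configure_options CC=pgcc FC=pgf90>

# Incorrect (as in the original command): the quotes start right after the
# token with no space, so the shell concatenates them into one word and
# rpm would take "configure_optionsCC=pgcc" as the macro name.
printargs --define configure_options"CC=pgcc FC=pgf90"
# <--define>
# <configure_optionsCC=pgcc FC=pgf90>
```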
> Brian Andrus
> QSS Group, Inc.
> Naval Research Laboratory
> Monterey, California
> Desk: 831-656-4839
> -----Original Message-----
> From: users-bounces_at_[hidden] [mailto:users-bounces_at_open-mpi.org]
> On Behalf Of Jeff Squyres
> Sent: Wednesday, December 20, 2006 9:48 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] How do I compile OpenMPI with PGI compilers
> and F90 support?
> Can you send the full output from configure and config.log? See this
> page for details of what we need for compile failures:
> Also note that there is a slightly newer version than what you're
> using -- v1.1.2 (1.1.3 may actually be out shortly, too).
> Note that our servers will be offline several hours tomorrow
> morning for
> planned maintenance (it's that time of year), so be sure to look on the
> web site today or after tomorrow morning.
> On Dec 20, 2006, at 12:05 PM, Andrus, Mr. Brian ((Contractor)) wrote:
>> I am trying to build an OpenMPI rpm for RHEL4U4 using the following:
>> rpmbuild --rebuild --define configure_options"CC=pgcc CXX=pgCC
>> FC=pgf90 FFLAGS=-fastsse FCFLAGS=-fastsse" ./openmpi-1.1.1-1.src.rpm
>> It builds the rpm but there are some warnings:
>> configure: WARNING: -fno-strict-aliasing has been added to CFLAGS
>> configure: WARNING: -finline-functions has been added to CXXFLAGS
>> configure: WARNING: *** Fortran 77 and Fortran 90 compilers are not
>> link compatible
>> configure: WARNING: *** Disabling MPI Fortran 90/95 bindings
>> configure: WARNING: Unknown architecture ... proceeding anyway
>> configure: WARNING: File locks may not work with NFS. See the
>> Installation and users manual for instructions on testing and if
>> necessary fixing this
>> And when I try to compile a simple hello world fortran program:
>> [root_at_system ~]# mpif90 hello.f
>> Unfortunately, this installation of Open MPI was not compiled with
>> Fortran 90 support. As such, the mpif90 compiler is non-functional.
>> I have PGI v6.1 compilers installed at /usr/pgi/linux86-64/6.1/
>> Brian Andrus
>> QSS Group, Inc.
>> Naval Research Laboratory
>> Monterey, California
>> Desk: 831-656-4839
>> -----Original Message-----
>> From: users-bounces_at_[hidden] [mailto:users-bounces_at_open-mpi.org]
>> On Behalf Of Renato Golin
>> Sent: Wednesday, December 20, 2006 7:48 AM
>> To: Open MPI Users
>> Subject: Re: [OMPI users] Suggestions needed for parallelisation of
>> sorting algorithms (quicksort)
>> On 12/20/06, Harakiri <harakiri_23_at_[hidden]> wrote:
>>> I will study through the suggested paper; however, I actually read a
>>> different paper which suggested using fewer messages. I would imagine
>>> that for arrays of, say, 100 million numbers the network
>>> messages become the critical factor.
>> It depends completely on your network topology and technology (i.e.
>> bandwidth and latency). It's very hard to predict a generic behaviour
>> other than: "more data is worse".
>> Ethernet is quite good at bandwidth but not at latency, so a few big
>> chunks are better than lots of small chunks; it also depends on how
>> the network is carrying your packets along the way.
>> The network is a critical factor only if its running time is
>> comparable to or greater than the processing time. Copying 1 MB between
>> nodes is critical for a nanosecond computation but not if it'll take
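That trade-off can be sketched with back-of-envelope arithmetic (the numbers below are assumptions for illustration: roughly 100 MB/s of usable gigabit Ethernet bandwidth and ~50 microseconds of per-message latency; real figures depend entirely on the fabric):

```shell
# Transfer time for N messages carrying S bytes total:
#   t = N * latency + S / bandwidth
# Sending 100 million 4-byte integers (400 MB) costs ~4 s of bandwidth
# no matter how it is split; latency only dominates once the data is
# sliced into very many small messages.
bytes=400000000          # 100M 4-byte integers
bw=100000000             # assumed ~100 MB/s usable bandwidth
lat_us=50                # assumed ~50 us latency per message

transfer_ms() {          # transfer_ms <num_messages> -> total time in ms
  echo $(( $1 * lat_us / 1000 + bytes * 1000 / bw ))
}

transfer_ms 10           # 10 big chunks:   prints 4000  (~4 s)
transfer_ms 1000000      # 1M small chunks: prints 54000 (~54 s)
```

So with a handful of large messages the cost is essentially pure bandwidth, matching the "few big chunks" advice above.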
>> users mailing list
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems