
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] 1.7rc8 is posted
From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2013-02-27 20:31:54

On Feb 27, 2013, at 7:36 PM, Pavel Mezentsev <pavel.mezentsev_at_[hidden]> wrote:

> I've tried the new rc. Here is what I got:

Thanks for testing.

> 1) I successfully built it with intel-13.1 and gcc-4.7.2, but the builds failed with open64-4.5.2 and ekopath-5.0.0 (PathScale). The problems are in the Fortran part. In each case I used the following configure line:
> CC=$CC CXX=$CXX F77=$F77 FC=$FC ./configure --prefix=$prefix --with-knem=$knem_path
> Open64 failed during configuration with the following:
> *** Fortran compiler
> checking whether we are using the GNU Fortran compiler... yes
> checking whether openf95 accepts -g... yes
> configure: WARNING: Open MPI now ignores the F77 and FFLAGS environment variables; only the FC and FCFLAGS environment variables are used.
> checking whether ln -s works... yes
> checking if Fortran compiler works... yes
> checking for extra arguments to build a shared library... none needed
> checking for Fortran flag to compile .f files... none
> checking for Fortran flag to compile .f90 files... none
> checking to see if Fortran compilers need additional linker flags... none
> checking external symbol convention... double underscore
> checking if C and Fortran are link compatible... yes
> checking to see if Fortran compiler likes the C++ exception flags... skipped (no C++ exceptions flags)
> checking to see if mpifort compiler needs additional linker flags... none
> checking if Fortran compiler supports CHARACTER... yes
> checking size of Fortran CHARACTER... 1
> checking for C type corresponding to CHARACTER... char
> checking alignment of Fortran CHARACTER... 1
> checking for corresponding KIND value of CHARACTER... C_SIGNED_CHAR
> checking KIND value of Fortran C_SIGNED_CHAR... no ISO_C_BINDING -- fallback
> checking Fortran value of selected_int_kind(4)... no
> configure: WARNING: Could not determine KIND value of C_SIGNED_CHAR
> configure: WARNING: See config.log for more details
> configure: error: Cannot continue

Please send the full configure output as well as the config.log file (please compress).
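Side note: as the configure warning in your output says, 1.7 ignores F77/FFLAGS and only uses FC/FCFLAGS. A sketch of the configure invocation with that in mind (the Open64 driver names opencc/openCC/openf95 are assumed from your log; $prefix and $knem_path are the placeholders from your original report):

```shell
# Open MPI 1.7 ignores F77/FFLAGS; FC/FCFLAGS select the Fortran compiler.
# Compiler driver names below are assumptions based on the configure log.
CC=opencc CXX=openCC FC=openf95 \
    ./configure --prefix=$prefix --with-knem=$knem_path
```

Dropping F77/F77-related settings won't change the KIND-detection failure, but it avoids the warning and makes clear which compiler configure is actually probing.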

> Ekopath failed during make with the following error:
> PPFC mpi-f08-sizeof.lo
> PPFC mpi-f08.lo
> In file included from mpi-f08.F90:37:
> mpi-f-interfaces-bind.h:1908: warning: extra tokens at end of #endif directive
> mpi-f-interfaces-bind.h:2957: warning: extra tokens at end of #endif directive
> In file included from mpi-f08.F90:38:
> pmpi-f-interfaces-bind.h:1911: warning: extra tokens at end of #endif directive
> pmpi-f-interfaces-bind.h:2963: warning: extra tokens at end of #endif directive
> pathf95-1044 pathf95: INTERNAL OMPI_OP_CREATE_F, File = mpi-f-interfaces-bind.h, Line = 955, Column = 29
> Internal : Unexpected ATP_PGM_UNIT in check_interoperable_pgm_unit()

I've pinged Pathscale about this.

> 2) I ran a couple of tests (IMB) with the new version, on a system consisting of 10 nodes with Intel Sandy Bridge processors and FDR ConnectX-3 InfiniBand adapters.
> First I tried the following parameters:
> mpirun -np $NP -hostfile hosts --mca btl openib,sm,self --bind-to-core -npernode 16 --mca mpi_leave_pinned 1 ./IMB-MPI1 -npmin $NP -mem 4G $COLL
> This combination complained about mpi_leave_pinned. The same line works with 1.6.3. Has something changed in the new release that I've missed?
> --------------------------------------------------------------------------
> A process attempted to use the "leave pinned" MPI feature, but no
> memory registration hooks were found on the system at run time. This
> may be the result of running on a system that does not support memory
> hooks or having some other software subvert Open MPI's use of the
> memory hooks. You can disable Open MPI's use of memory hooks by
> setting both the mpi_leave_pinned and mpi_leave_pinned_pipeline MCA
> parameters to 0.

This should not happen, and might explain the lower performance you saw in the IMB results.
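In the meantime, the help message itself describes a workaround: disable leave-pinned explicitly. A sketch, using the MCA parameter names quoted in the error text and carrying over the rest of your original command line:

```shell
# Workaround sketch: explicitly disable leave-pinned behavior.
# Parameter names come from the error message above; hostfile, binding,
# and benchmark arguments are carried over from the earlier command.
mpirun -np $NP -hostfile hosts \
    --mca btl openib,sm,self \
    --mca mpi_leave_pinned 0 \
    --mca mpi_leave_pinned_pipeline 0 \
    -npernode 16 --bind-to-core \
    ./IMB-MPI1 -npmin $NP -mem 4G $COLL
```

Note that this gives up registration caching, so large-message bandwidth may suffer; it's a way to keep testing, not a fix.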

Nathan -- you reported that you saw something like this before, but were then unable to reproduce it. Any ideas what's going on here? Mellanox?

(although the short message latency is troubling...)

Can you ensure that you aren't using MXM in 1.7? I understand that its short message latency is worse than that of RC verbs. You'll need to add "--mca pml ob1" to your command line.

Jeff Squyres
For corporate legal information go to: