Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] 1.7rc8 is posted
From: Pavel Mezentsev (pavel.mezentsev_at_[hidden])
Date: 2013-02-28 12:04:48


Do you mean the logs from the failed attempts? They are attached. If you
need the logs from the successful builds, I'll have to regenerate them,
since the files from successful builds have been deleted.

I am not using MXM. The results with the option you suggested (--mca pml
ob1) are the same as before:
#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
       #bytes #repetitions t[usec] Mbytes/sec
            0 1000 1.49 0.00
            1 1000 1.58 0.61
            2 1000 1.12 1.71
            4 1000 1.10 3.48
            8 1000 1.11 6.90
           16 1000 1.11 13.69
           32 1000 1.12 27.21
           64 1000 1.16 52.52
          128 1000 1.72 70.83
          256 1000 1.84 132.72
          512 1000 1.99 245.74
         1024 1000 2.25 433.92
         2048 1000 2.87 680.54
         4096 1000 3.52 1109.13
         8192 1000 4.68 1670.60
        16384 1000 9.66 1617.91
        32768 1000 14.30 2185.24
        65536 640 23.45 2665.33
       131072 320 35.99 3473.15
       262144 160 58.05 4306.65
       524288 80 101.94 4904.69
      1048576 40 188.65 5300.86
      2097152 20 526.05 3801.94
      4194304 10 1096.09 3649.32
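As a sanity check on the numbers above: IMB's throughput column is just the message size divided by the reported time (half the round-trip for PingPong), with 1 Mbyte taken as 2**20 bytes. A small helper of my own (not part of IMB) reproduces the last column:

```python
def imb_bandwidth(msg_bytes, t_usec):
    """Mbytes/sec as IMB reports it: message size over the reported time,
    with 1 Mbyte = 2**20 bytes (my reconstruction of the formula)."""
    return msg_bytes / (t_usec * 1e-6) / 2**20

# The 4 MB PingPong row above: 4194304 bytes at 1096.09 usec
print(round(imb_bandwidth(4194304, 1096.09), 1))  # close to the reported 3649.32
```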

#---------------------------------------------------
# Benchmarking PingPing
# #processes = 2
#---------------------------------------------------
       #bytes #repetitions t[usec] Mbytes/sec
            0 1000 1.10 0.00
            1 1000 1.24 0.77
            2 1000 1.23 1.55
            4 1000 1.23 3.10
            8 1000 1.25 6.09
           16 1000 1.14 13.41
           32 1000 1.11 27.40
           64 1000 1.16 52.75
          128 1000 1.71 71.34
          256 1000 1.84 132.33
          512 1000 1.98 246.63
         1024 1000 2.27 429.26
         2048 1000 2.91 672.30
         4096 1000 3.52 1109.43
         8192 1000 4.80 1627.25
        16384 1000 9.98 1565.64
        32768 1000 14.70 2125.14
        65536 640 24.18 2584.97
       131072 320 37.33 3348.95
       262144 160 60.59 4125.82
       524288 80 105.83 4724.78
      1048576 40 197.82 5055.05
      2097152 20 791.35 2527.34
      4194304 10 1820.30 2197.44
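For completeness, the runs above used the same command line as before with the suggested flag added; roughly the following (reconstructed from this thread, not a verbatim transcript -- hostfile name and shell variables as in my earlier message):

```shell
mpirun -np $NP -hostfile hosts --mca btl openib,sm,self --bind-to-core \
    -npernode 16 --mca pml ob1 ./IMB-MPI1 -npmin $NP PingPong PingPing
```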

Regards, Pavel Mezentsev.

2013/2/28 Jeff Squyres (jsquyres) <jsquyres_at_[hidden]>

> On Feb 27, 2013, at 7:36 PM, Pavel Mezentsev <pavel.mezentsev_at_[hidden]>
> wrote:
>
> > I've tried the new rc. Here is what I got:
>
> Thanks for testing.
>
> > 1) I successfully built it with intel-13.1 and gcc-4.7.2, but the builds
> failed with open64-4.5.2 and ekopath-5.0.0 (PathScale). The problems are
> in the Fortran part. In each case I used the following configure
> line:
> > CC=$CC CXX=$CXX F77=$F77 FC=$FC ./configure --prefix=$prefix
> --with-knem=$knem_path
> > Open64 failed during configuration with the following:
> > *** Fortran compiler
> > checking whether we are using the GNU Fortran compiler... yes
> > checking whether openf95 accepts -g... yes
> > configure: WARNING: Open MPI now ignores the F77 and FFLAGS environment
> variables; only the FC and FCFLAGS environment variables are used.
> > checking whether ln -s works... yes
> > checking if Fortran compiler works... yes
> > checking for extra arguments to build a shared library... none needed
> > checking for Fortran flag to compile .f files... none
> > checking for Fortran flag to compile .f90 files... none
> > checking to see if Fortran compilers need additional linker flags... none
> > checking external symbol convention... double underscore
> > checking if C and Fortran are link compatible... yes
> > checking to see if Fortran compiler likes the C++ exception flags...
> skipped (no C++ exceptions flags)
> > checking to see if mpifort compiler needs additional linker flags... none
> > checking if Fortran compiler supports CHARACTER... yes
> > checking size of Fortran CHARACTER... 1
> > checking for C type corresponding to CHARACTER... char
> > checking alignment of Fortran CHARACTER... 1
> > checking for corresponding KIND value of CHARACTER... C_SIGNED_CHAR
> > checking KIND value of Fortran C_SIGNED_CHAR... no ISO_C_BINDING --
> fallback
> > checking Fortran value of selected_int_kind(4)... no
> > configure: WARNING: Could not determine KIND value of C_SIGNED_CHAR
> > configure: WARNING: See config.log for more details
> > configure: error: Cannot continue
>
> Please send the full configure output as well as the config.log file
> (please compress).
>
> > Ekopath failed during make with the following error:
> > PPFC mpi-f08-sizeof.lo
> > PPFC mpi-f08.lo
> > In file included from mpi-f08.F90:37:
> > mpi-f-interfaces-bind.h:1908: warning: extra tokens at end of #endif
> directive
> > mpi-f-interfaces-bind.h:2957: warning: extra tokens at end of #endif
> directive
> > In file included from mpi-f08.F90:38:
> > pmpi-f-interfaces-bind.h:1911: warning: extra tokens at end of #endif
> directive
> > pmpi-f-interfaces-bind.h:2963: warning: extra tokens at end of #endif
> directive
> > pathf95-1044 pathf95: INTERNAL OMPI_OP_CREATE_F, File =
> mpi-f-interfaces-bind.h, Line = 955, Column = 29
> > Internal : Unexpected ATP_PGM_UNIT in check_interoperable_pgm_unit()
>
> I've pinged Pathscale about this.
>
> > 2) I ran a couple of IMB tests with the new version, on a system
> consisting of 10 nodes with Intel Sandy Bridge processors and FDR
> ConnectX-3 InfiniBand adapters.
> > First I tried the following parameters:
> > mpirun -np $NP -hostfile hosts --mca btl openib,sm,self --bind-to-core
> -npernode 16 --mca mpi_leave_pinned 1 ./IMB-MPI1 -npmin $NP -mem 4G $COLL
> > This combination complained about mpi_leave_pinned. The same line works
> with 1.6.3. Is something different in the new release that I've missed?
> >
> --------------------------------------------------------------------------
> > A process attempted to use the "leave pinned" MPI feature, but no
> > memory registration hooks were found on the system at run time. This
> > may be the result of running on a system that does not support memory
> > hooks or having some other software subvert Open MPI's use of the
> > memory hooks. You can disable Open MPI's use of memory hooks by
> > setting both the mpi_leave_pinned and mpi_leave_pinned_pipeline MCA
> > parameters to 0.
>
> This should not happen, and it might explain the lower performance in
> your IMB results.
>
> Nathan -- you reported seeing something like this before, but were then
> unable to reproduce it. Any ideas what's going on here? Mellanox?
>
> (although the short message latency is troubling...)
>
> Can you make sure that you aren't using MXM in 1.7? I understand that its
> short message latency is worse than that of RC verbs. You'll need to add
> "--mca pml ob1" to your command line.
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>