Open MPI Development Mailing List Archives


From: Sven Stork (stork_at_[hidden])
Date: 2006-08-14 08:11:14


Problem solved.

The program has a bug: instead of using the status variable (an array), it
uses the stat variable (a scalar) as the status argument of MPI_Recv, so
the receive overflows the buffer it is given.
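
For reference, the fix amounts to declaring the status passed to MPI_Recv
as an INTEGER array of size MPI_STATUS_SIZE. A minimal corrected sketch
(illustrative only, not the actual ddt/examples/hello.f; the program and
variable names are made up):

C     Minimal sketch of the corrected receive: the status argument of
C     MPI_RECV must be an INTEGER array of size MPI_STATUS_SIZE,
C     otherwise the library writes past the end of a scalar.
      PROGRAM HELLOFIX
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER rank, ierr
      INTEGER status(MPI_STATUS_SIZE)
      CHARACTER*20 msg

      CALL MPI_INIT(ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      IF (rank .EQ. 0) THEN
         CALL MPI_RECV(msg, 20, MPI_CHARACTER, 1, 0,
     &                 MPI_COMM_WORLD, status, ierr)
         PRINT *, 'Message received: ', msg
      ELSE IF (rank .EQ. 1) THEN
         msg = 'HelloFromMe'
         CALL MPI_SEND(msg, 20, MPI_CHARACTER, 0, 0,
     &                 MPI_COMM_WORLD, ierr)
      END IF
      CALL MPI_FINALIZE(ierr)
      END

Built with mpif77 and started with "mpirun -np 2", this exchanges one
message and terminates on both ranks, with or without -g.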
 

On Monday 14 August 2006 09:58, Sven Stork wrote:
> The problem is that after the MPI_Recv call the loop
> variable is reset to 0, and therefore the loop doesn't terminate
> as it should for 2 processes.
>
> If you pass another temporary variable as the parameter instead, the
> program works. The strange thing is that the temporary variable is not
> changed and still contains its original value, while the loop variable
> still gets changed (this time to a value higher than 0).
>
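
The receive loop that shows this behaviour looks roughly like the
following fragment (a sketch of the pattern described above, not the
actual hello.f; source, nprocs, stat and msg are illustrative names):

C     Sketch of the failing loop: 'stat' is a scalar, but MPI_RECV
C     fills MPI_STATUS_SIZE integers starting at its address, so it
C     presumably overwrites whatever the compiler placed next to it,
C     here the loop counter, when compiled at -O0 (implied by -g).
      INTEGER source, nprocs, stat, ierr
      CHARACTER*20 msg
      DO source = 1, nprocs - 1
         PRINT *, 'waiting for message from', source
         CALL MPI_RECV(msg, 20, MPI_CHARACTER, source, 0,
     &                 MPI_COMM_WORLD, stat, ierr)
C        After the receive, 'source' can hold part of the status
C        instead of its old value, so the loop repeats instead of
C        finishing; with a separate temporary passed in place of
C        'stat', the counter is reportedly still modified but ends
C        up above 0, so the loop terminates.
      END DO
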
> On Friday 11 August 2006 17:07, Jeff Squyres wrote:
> > I'm not quite sure I understand -- does the application hang in an
> > MPI call? Or is there some compiler error that is causing it to
> > execute a DO loop incorrectly?
> >
> >
> > On 8/11/06 6:25 AM, "Sven Stork" <stork_at_[hidden]> wrote:
> >
> > > The real problem is not the -g itself but the -O0 option that is
> > > automatically implied by -g. If you compile with "-g -ON" for 0 < N,
> > > everything works as expected.
> > >
> > > Thanks,
> > > Sven
> > >
> > > On Friday 11 August 2006 11:54, Bettina Krammer wrote:
> > >> Hi,
> > >>
> > >> When I use the attached hello.f with Open MPI 1.1.0 and an
> > >> underlying Intel 9.0 or 9.1 compiler on our Xeon cluster, it
> > >> deadlocks when compiled with the -g option but works without -g:
> > >>
> > >> ===================
> > >> output with -g:
> > >>
> > >> $mpirun -np 2 ./hello-g
> > >>
> > >> My rank is 0 !
> > >> waiting for message from 1
> > >> My rank is 1 !
> > >> Greetings from process 1 !
> > >> Sending message from 1 !
> > >> Message recieved: HelloFromMexxxxxxxxx!
> > >> waiting for message from 1
> > >>
> > >> [...deadlock...]
> > >> ===================
> > >>
> > >> output without -g:
> > >>
> > >> $mpirun -np 2 ./hello-no-g
> > >>
> > >> My rank is 0 !
> > >> waiting for message from 1
> > >> My rank is 1 !
> > >> Greetings from process 1 !
> > >> Sending message from 1 !
> > >> Message recieved: HelloFromMexxxxxxxxx!
> > >> All done... 0
> > >> All done... 1
> > >> ===================
> > >>
> > >> Thanks, Bettina Krammer
> > >>
> > >> (The example is taken from the distribution of DDT, to be found in
> > >> ddt/examples. The problem is reproducible with the simplified
> > >> hello-simple.f. The deadlock occurs in the DO source... MPI_Recv(...)
> > >> .... loop)
> > >> ===================
> > >> The config.log is not available to me.
> > >>
> > >> hpc43203 cacau1 219$ompi_info
> > >> Open MPI: 1.1
> > >> Open MPI SVN revision: r10477
> > >> Open RTE: 1.1
> > >> Open RTE SVN revision: r10477
> > >> OPAL: 1.1
> > >> OPAL SVN revision: r10477
> > >> Prefix: /opt/OpenMPI/1.1.0/
> > >> Configured architecture: x86_64-unknown-linux-gnu
> > >> Configured by: hpcraink
> > >> Configured on: Mon Jul 31 12:55:30 CEST 2006
> > >> Configure host: cacau1
> > >> Built by: hpcraink
> > >> Built on: Mon Jul 31 13:16:04 CEST 2006
> > >> Built host: cacau1
> > >> C bindings: yes
> > >> C++ bindings: yes
> > >> Fortran77 bindings: yes (all)
> > >> Fortran90 bindings: yes
> > >> Fortran90 bindings size: small
> > >> C compiler: icc
> > >> C compiler absolute: /opt/intel/compiler/9.1/cce/bin/icc
> > >> C++ compiler: icpc
> > >> C++ compiler absolute: /opt/intel/compiler/9.1/cce/bin/icpc
> > >> Fortran77 compiler: ifc
> > >> Fortran77 compiler abs: /opt/intel/compiler/9.1/fce/bin/ifc
> > >> Fortran90 compiler: ifc
> > >> Fortran90 compiler abs: /opt/intel/compiler/9.1/fce/bin/ifc
> > >> C profiling: yes
> > >> C++ profiling: yes
> > >> Fortran77 profiling: yes
> > >> Fortran90 profiling: yes
> > >> C++ exceptions: no
> > >> Thread support: posix (mpi: no, progress: no)
> > >> Internal debug support: no
> > >> MPI parameter check: runtime
> > >> Memory profiling support: no
> > >> Memory debugging support: no
> > >> libltdl support: yes
> > >> MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA timer: linux (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
> > >> MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
> > >> MCA coll: basic (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA coll: self (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA coll: sm (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA io: romio (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA mpool: mvapi (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA btl: self (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA btl: sm (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA btl: mvapi (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
> > >> MCA topo: unity (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
> > >> MCA gpr: null (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA iof: svc (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA ns: replica (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
> > >> MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA ras: tm (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA rml: oob (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA pls: fork (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA pls: tm (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA sds: env (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA sds: seed (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1)
> > >> MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1)
> > >>
> > >>
> > >> --
> > >> ---------------------------------------------
> > >> * NEW PHONE AND FAX-NUMBERS *
> > >> ---------------------------------------------
> > >> Dipl.-Math. Bettina Krammer
> > >> High Performance Computing Center (HLRS)
> > >> University of Stuttgart
> > >> Nobelstrasse 19
> > >> D-70569 Stuttgart
> > >>
> > >> Phone: ++49 (0)711-685-65890
> > >> Fax: ++49 (0)711-685-65832
> > >> email: krammer_at_[hidden]
> > >> URL: http://www.hlrs.de
> > >> ---------------------------------------------
> > >>
> >
> >
> > --
> > Jeff Squyres
> > Server Virtualization Business Unit
> > Cisco Systems
> >
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>