Open MPI logo

Open MPI Development Mailing List Archives

  |   Home   |   Support   |   FAQ   |   all Development mailing list

From: Bettina Krammer (krammer_at_[hidden])
Date: 2006-08-11 05:54:40


Hi,

when I use the attached hello.f with Open MPI 1.1.0 and underlying Intel
9.0 or 9.1 compiler on our Xeon cluster, it is deadlocking when compiled
with -g option but works without -g:

===================
output with -g:

$mpirun -np 2 ./hello-g

My rank is 0 !
waiting for message from 1
My rank is 1 !
Greetings from process 1 !
Sending message from 1 !
Message recieved: HelloFromMexxxxxxxxx!
waiting for message from 1

      [...deadlock...]
===================

 output without -g:

$mpirun -np 2 ./hello-no-g

My rank is 0 !
 waiting for message from 1
 My rank is 1 !
 Greetings from process 1 !
 Sending message from 1 !
 Message recieved: HelloFromMexxxxxxxxx!
 All done... 0
 All done... 1
===================

Thanks, Bettina Krammer

(The example is taken from the distribution of DDT, to be found in
ddt/examples. The problem is reproducible with the simplified
hello-simple.f. The deadlock occurs in the DO source... MPI_Recv(...)
.... loop)
===================
The config.log is not available to me.

hpc43203 cacau1 219$ompi_info
                Open MPI: 1.1
   Open MPI SVN revision: r10477
                Open RTE: 1.1
   Open RTE SVN revision: r10477
                    OPAL: 1.1
       OPAL SVN revision: r10477
                  Prefix: /opt/OpenMPI/1.1.0/
 Configured architecture: x86_64-unknown-linux-gnu
           Configured by: hpcraink
           Configured on: Mon Jul 31 12:55:30 CEST 2006
          Configure host: cacau1
                Built by: hpcraink
                Built on: Mon Jul 31 13:16:04 CEST 2006
              Built host: cacau1
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: yes
 Fortran90 bindings size: small
              C compiler: icc
     C compiler absolute: /opt/intel/compiler/9.1/cce/bin/icc
            C++ compiler: icpc
   C++ compiler absolute: /opt/intel/compiler/9.1/cce/bin/icpc
      Fortran77 compiler: ifc
  Fortran77 compiler abs: /opt/intel/compiler/9.1/fce/bin/ifc
      Fortran90 compiler: ifc
  Fortran90 compiler abs: /opt/intel/compiler/9.1/fce/bin/ifc
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: yes
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
              MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1)
           MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1)
           MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1)
           MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.1)
               MCA timer: linux (MCA v1.0, API v1.0, Component v1.1)
           MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
           MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: basic (MCA v1.0, API v1.0, Component v1.1)
                MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1)
                MCA coll: self (MCA v1.0, API v1.0, Component v1.1)
                MCA coll: sm (MCA v1.0, API v1.0, Component v1.1)
                MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1)
                  MCA io: romio (MCA v1.0, API v1.0, Component v1.1)
               MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1)
               MCA mpool: mvapi (MCA v1.0, API v1.0, Component v1.1)
                 MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1)
                 MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1)
              MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1)
                 MCA btl: self (MCA v1.0, API v1.0, Component v1.1)
                 MCA btl: sm (MCA v1.0, API v1.0, Component v1.1)
                 MCA btl: mvapi (MCA v1.0, API v1.0, Component v1.1)
                 MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
                MCA topo: unity (MCA v1.0, API v1.0, Component v1.1)
                 MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
                 MCA gpr: null (MCA v1.0, API v1.0, Component v1.1)
                 MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1)
                 MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1)
                 MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1)
                 MCA iof: svc (MCA v1.0, API v1.0, Component v1.1)
                  MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1)
                  MCA ns: replica (MCA v1.0, API v1.0, Component v1.1)
                 MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1)
                 MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1)
                 MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1)
                 MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1)
                 MCA ras: tm (MCA v1.0, API v1.0, Component v1.1)
                 MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1)
                 MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1)
               MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1)
                MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1)
                MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1)
                 MCA rml: oob (MCA v1.0, API v1.0, Component v1.1)
                 MCA pls: fork (MCA v1.0, API v1.0, Component v1.1)
                 MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1)
                 MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1)
                 MCA pls: tm (MCA v1.0, API v1.0, Component v1.1)
                 MCA sds: env (MCA v1.0, API v1.0, Component v1.1)
                 MCA sds: seed (MCA v1.0, API v1.0, Component v1.1)
                 MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1)
                 MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1)
                 MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1)

-- 
---------------------------------------------
* NEW PHONE AND FAX-NUMBERS *
---------------------------------------------
Dipl.-Math. Bettina Krammer
High Performance Computing Center (HLRS)
University of Stuttgart
Nobelstrasse 19
D-70569 Stuttgart
Phone: ++49 (0)711-685-65890
Fax: ++49 (0)711-685-65832
email: krammer_at_[hidden]
URL: http://www.hlrs.de
---------------------------------------------

c123456
       INTEGER FUNCTION FUNC1 ()
         INTEGER my_int, your_int
         my_int=2
         your_int=3
         FUNC1=my_int*your_int
       END
        
       SUBROUTINE SUB1 ()
         INTEGER test,FUNC1
         test=FUNC1()
         IF (test.eq.1) THEN
           test=0
         ELSE
           test=-1
         END IF
       END
       
       PROGRAM hellof77
       include 'mpif.h'
       
       INTEGER i,my_rank,p,source,dest,tag,x,y,beingwatched,ierr,my_size
       CHARACTER message*21
       CHARACTER messagefirst
       INTEGER goat(10)
       INTEGER status(MPI_STATUS_SIZE)
       INTEGER stat
       
       CALL MPI_INIT(ierr)
       CALL MPI_COMM_SIZE(MPI_COMM_WORLD, my_size, ierr)
       CALL MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)
       
       IF (my_size.eq.8) THEN
          IF (my_rank.eq.5) THEN
            CALL MPI_SEND(message,400,MPI_CHARACTER,dest,tag,MPI_COMM_W
     cORLD,ierr)
          END IF
       END IF
       message="HelloFromMexxxxxxxxx!"
       
       PRINT *,"My rank is ",my_rank,"!"
       
       CALL SUB1()

       DO x=1,(my_rank-1)
        goat(x)=my_rank*x
       END DO
       
       beingwatched=1
       tag=0
       
       IF (my_rank.ne.0) THEN
         PRINT *,"Greetings from process ",my_rank,"!"
         PRINT *,"Sending message from ",my_rank,"!"
         dest=0
         CALL MPI_Send(message,21,MPI_CHARACTER,dest,tag,MPI_COMM_WORLD
     c,ierr)
         beingwatched=beingwatched-1
       ELSE
         message="Hello from my process"
         DO source=1,(my_size-1)
           PRINT *,"waiting for message from ",source
        CALL MPI_Recv(message,21,MPI_CHARACTER,source,tag,MPI_COMM_WORLD
     c,stat,ierr)
           PRINT *,"Message recieved: ",message
           beingwatched=beingwatched+1
         END DO
       END IF
       
       beingwatched=12
       CALL MPI_Finalize(ierr)
       beingwatched=0
       PRINT *,"All done...",my_rank
       END

c123456
       
       PROGRAM hellof77
       include 'mpif.h'
       
       INTEGER my_rank,source,dest,tag,ierr,my_size
       CHARACTER message*21
       INTEGER stat
       
       CALL MPI_INIT(ierr)
       CALL MPI_COMM_SIZE(MPI_COMM_WORLD, my_size, ierr)
       CALL MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)
       
       message="HelloFromMexxxxxxxxx!"
       
       PRINT *,"My rank is ",my_rank,"!"
       
       tag=0
       
       IF (my_rank.ne.0) THEN
         PRINT *,"Greetings from process ",my_rank,"!"
         PRINT *,"Sending message from ",my_rank,"!"
         dest=0
         CALL MPI_Send(message,21,MPI_CHARACTER,dest,tag,MPI_COMM_WORLD
     c,ierr)
       ELSE
         message="Hello from my process"
         DO source=1,(my_size-1)
           PRINT *,"waiting for message from ",source
        CALL MPI_Recv(message,21,MPI_CHARACTER,source,tag,MPI_COMM_WORLD
     c,stat,ierr)
           PRINT *,"Message recieved: ",message
         END DO
       END IF
       
       CALL MPI_Finalize(ierr)
       PRINT *,"All done...",my_rank
       END