Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] MPI_COMM_DUP freeze with OpenMPI 1.4.1
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-05-19 22:04:46


On May 13, 2011, at 8:31 AM, francoise.roch_at_[hidden] wrote:

> Here is the MUMPS portion of code (in the zmumps_part1.F file) where the slaves call MPI_COMM_DUP; id%PAR and MASTER are initialized to 0 beforehand:
>
> CALL MPI_COMM_SIZE(id%COMM, id%NPROCS, IERR )

I re-indented so that I could read it better:

      CALL MPI_COMM_SIZE(id%COMM, id%NPROCS, IERR )
      IF ( id%PAR .eq. 0 ) THEN
         IF ( id%MYID .eq. MASTER ) THEN
            color = MPI_UNDEFINED
         ELSE
            color = 0
         END IF
         CALL MPI_COMM_SPLIT( id%COMM, color, 0,
     &        id%COMM_NODES, IERR )
         id%NSLAVES = id%NPROCS - 1
      ELSE
         CALL MPI_COMM_DUP( id%COMM, id%COMM_NODES, IERR )
         id%NSLAVES = id%NPROCS
      END IF

      IF (id%PAR .ne. 0 .or. id%MYID .NE. MASTER) THEN
         CALL MPI_COMM_DUP( id%COMM_NODES, id%COMM_LOAD, IERR )
      ENDIF

That doesn't look right -- both MPI_COMM_SPLIT and MPI_COMM_DUP are collective, meaning that all processes in the communicator must call them. In the first case, only some processes are calling MPI_COMM_SPLIT. Is there some other logic that forces the rest of the processes to call MPI_COMM_SPLIT, too?
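For reference, here is a minimal, self-contained sketch of the pattern the
MPI standard expects -- illustrative code only, not the MUMPS source (the
program name and the comm_nodes / comm_load variables are made up): every
rank of the parent communicator calls MPI_COMM_SPLIT, the excluded rank
passes MPI_UNDEFINED and gets MPI_COMM_NULL back, and only the members of
the resulting communicator call MPI_COMM_DUP on it.

      program split_dup_sketch
      use mpi
      implicit none
      integer :: ierr, rank, color, comm_nodes, comm_load

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      ! Every rank of the parent communicator takes part in the split;
      ! rank 0 asks to be left out by passing MPI_UNDEFINED and
      ! therefore receives MPI_COMM_NULL in comm_nodes.
      if (rank .eq. 0) then
         color = MPI_UNDEFINED
      else
         color = 0
      end if
      call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, 0, comm_nodes, ierr)

      ! MPI_COMM_DUP is collective over *its* communicator (comm_nodes),
      ! so only ranks that are actually members of comm_nodes call it.
      if (comm_nodes .ne. MPI_COMM_NULL) then
         call MPI_COMM_DUP(comm_nodes, comm_load, ierr)
         call MPI_COMM_FREE(comm_load, ierr)
         call MPI_COMM_FREE(comm_nodes, ierr)
      end if

      call MPI_FINALIZE(ierr)
      end program split_dup_sketch

The MPI_COMM_NULL check plays the same role as the "id%MYID .NE. MASTER"
guard in the MUMPS code above: it keeps the rank that is not a member of
the new communicator from joining a collective on it.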

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/