
Subject: Re: [OMPI users] System CPU of openmpi-1.7rc1
From: tmishima_at_[hidden]
Date: 2012-10-28 23:38:46


Hi,

I simplified my testing program as shown below.

I again compared the system CPU usage of openmpi-1.6.2 and
openmpi-1.7rc1/rc4 while some processes wait for others.

The result is the same as I reported before.

                 system cpu usage
openmpi-1.6.2    0%
openmpi-1.7rc1   70%
openmpi-1.7rc4   70%

My question is why openmpi-1.7rc differs so much from
openmpi-1.6.2 in system CPU usage. Is this the intended
behavior?

      INCLUDE 'mpif.h'
      CALL MPI_INIT(IERR)
c
      CALL MPI_COMM_RANK( MPI_COMM_WORLD, MYID, IERR )
      IF ( MYID .EQ. 0 ) CALL SLEEP(180) ! WAIT 180 SEC.
c
c     THE OTHER RANKS REACH THE ALLREDUCE FIRST AND WAIT FOR RANK 0
      ISND = 1
      CALL MPI_ALLREDUCE(ISND,IRCV,1,MPI_INTEGER,MPI_SUM,
     +MPI_COMM_WORLD,IERR)
      CALL MPI_FINALIZE(IERR)
c
      END
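
If the high system CPU comes from the waiting ranks busy-polling inside
MPI_ALLREDUCE, then I suppose forcing the yield-when-idle mode should
lower it. I have not tried this yet, and I am only assuming the MCA
parameter mpi_yield_when_idle still applies to this case, but for example:

command line : mpirun -mca mpi_yield_when_idle 1 -host node03 -np 16 ./testrun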

Regards,
tmishima

> I'm not sure - just fishing for possible answers. When we see high cpu
> usage, it usually occurs during MPI communications - when a process is
> waiting for a message to arrive, it polls at a high rate to keep the
> latency as low as possible. Since you have one process "sleep" before
> calling the finalize sequence, it could be that the other process is
> getting held up on a receive and thus eating the cpu.
>
> There really isn't anything special going on during Init/Finalize, and
> OMPI itself doesn't have any MPI communications in there. I'm not familiar
> with MUMPS, but if MUMPS finalize is doing something like an MPI_Barrier
> to ensure the procs finalize together, then that would explain what you
> see. The docs I could find imply there is some MPI embedded in MUMPS, but
> I couldn't find anything specific about finalize.
>
>
> On Oct 25, 2012, at 6:43 PM, tmishima_at_[hidden] wrote:
>
> >
> >
> > Hi Ralph,
> >
> > Do you really mean "MUMPS finalize"? I don't think it has much to do
> > with this behavior.
> >
> > Anyway, I'm just a MUMPS user. I have to ask the MUMPS developers
> > about what MUMPS initialize and finalize do.
> >
> > Regards,
> > tmishima
> >
> >> Out of curiosity, what does MUMPS finalize do? Does it send a message
> >> or do a barrier operation?
> >>
> >>
> >> On Oct 25, 2012, at 5:53 PM, tmishima_at_[hidden] wrote:
> >>
> >>>
> >>>
> >>> Hi,
> >>>
> >>> I find that the system CPU time of openmpi-1.7rc1 is quite different
> >>> from that of openmpi-1.6.2, as shown in the attached ganglia display.
> >>>
> >>> About 2 years ago, I reported a similar behavior with openmpi-1.4.3.
> >>> The testing method is the same one I used at that time.
> >>> (Please see my post entitled "SYSTEM CPU with OpenMPI 1.4.3".)
> >>>
> >>> Is this due to a check routine in the pre-release version, or is
> >>> something going wrong?
> >>>
> >>> Best regards,
> >>> Tetsuya Mishima
> >>>
> >>> ------------------
> >>> Testing program:
> >>>       INCLUDE 'mpif.h'
> >>>       INCLUDE 'dmumps_struc.h'
> >>>       TYPE (DMUMPS_STRUC) MUMPS_PAR
> >>> c
> >>>       MUMPS_PAR%COMM = MPI_COMM_WORLD
> >>>       MUMPS_PAR%SYM = 1
> >>>       MUMPS_PAR%PAR = 1
> >>>       MUMPS_PAR%JOB = -1 ! INITIALIZE MUMPS
> >>>       CALL MPI_INIT(IERR)
> >>>       CALL DMUMPS(MUMPS_PAR)
> >>> c
> >>>       CALL MPI_COMM_RANK( MPI_COMM_WORLD, MYID, IERR )
> >>>       IF ( MYID .EQ. 0 ) CALL SLEEP(180) ! WAIT 180 SEC.
> >>> c
> >>>       MUMPS_PAR%JOB = -2 ! FINALIZE MUMPS
> >>>       CALL DMUMPS(MUMPS_PAR)
> >>>       CALL MPI_FINALIZE(IERR)
> >>> c
> >>>       END
> >>> ( This does nothing but call the initialize & finalize
> >>> routines of MUMPS & MPI. )
> >>>
> >>> command line : mpirun -host node03 -np 16 ./testrun
> >>>
> >>> (See attached file: openmpi17rc1-cmp.bmp)
