I'm not sure - just fishing for possible answers. When we see high cpu usage, it usually occurs during MPI communications - when a process is waiting for a message to arrive, it polls at a high rate to keep the latency as low as possible. Since you have one process "sleep" before calling the finalize sequence, it could be that the other process is getting held up on a receive and thus eating the cpu.
There really isn't anything special going on during Init/Finalize, and OMPI itself doesn't have any MPI communications in there. I'm not familiar with MUMPS, but if MUMPS finalize is doing something like an MPI_Barrier to ensure the procs finalize together, then that would explain what you see. The docs I could find imply there is some MPI embedded in MUMPS, but I couldn't find anything specific about finalize.
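If you want to separate the two possibilities, a minimal sketch along these lines (no MUMPS involved; it assumes the non-standard SLEEP intrinsic your test program already uses) should reproduce the busy-wait pattern: rank 0 sleeps while the other ranks sit in an MPI_Barrier, and the waiting ranks show the high CPU from the polling:

      PROGRAM BARRIER_SPIN
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER MYID, IERR
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, MYID, IERR)
C     Rank 0 sleeps; the other ranks wait in the barrier, where the
C     MPI library polls for progress instead of sleeping.
      IF (MYID .EQ. 0) CALL SLEEP(180)
      CALL MPI_BARRIER(MPI_COMM_WORLD, IERR)
      CALL MPI_FINALIZE(IERR)
      END

If that little program shows the same CPU jump as your MUMPS test, then the polling during the wait, rather than anything MUMPS-specific, is the cause.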
On Oct 25, 2012, at 6:43 PM, tmishima_at_[hidden] wrote:
> Hi Ralph,
> Do you really mean "MUMPS finalize"? I don't think it has much relation to
> this behavior.
> Anyway, I'm just a MUMPS user. I will have to ask the MUMPS developers what
> its initialize and finalize routines actually do.
>> Out of curiosity, what does MUMPS finalize do? Does it send a message or
>> do a barrier operation?
>> On Oct 25, 2012, at 5:53 PM, tmishima_at_[hidden] wrote:
>>> I find that the system CPU time of openmpi-1.7rc1 is quite different from
>>> that of openmpi-1.6.2, as shown in the attached ganglia display.
>>> About 2 years ago, I reported a similar behavior of openmpi-1.4.3.
>>> The testing method is the same one I used at that time.
>>> (please see my post entitled "SYSTEM CPU with OpenMPI 1.4.3")
>>> Is this due to a pre-release version's checking routines, or is
>>> something going wrong?
>>> Best regards,
>>> Tetsuya Mishima
>>> Testing program:
>>>       PROGRAM TESTRUN
>>>       IMPLICIT NONE
>>>       INCLUDE 'mpif.h'
>>>       INCLUDE 'dmumps_struc.h'
>>>       TYPE (DMUMPS_STRUC) MUMPS_PAR
>>>       INTEGER MYID, IERR
>>>       MUMPS_PAR%COMM = MPI_COMM_WORLD
>>>       MUMPS_PAR%SYM = 1
>>>       MUMPS_PAR%PAR = 1
>>>       MUMPS_PAR%JOB = -1                 ! INITIALIZE MUMPS
>>>       CALL MPI_INIT(IERR)
>>>       CALL DMUMPS(MUMPS_PAR)
>>>       CALL MPI_COMM_RANK( MPI_COMM_WORLD, MYID, IERR )
>>>       IF ( MYID .EQ. 0 ) CALL SLEEP(180) ! WAIT 180 SEC.
>>>       MUMPS_PAR%JOB = -2                 ! FINALIZE MUMPS
>>>       CALL DMUMPS(MUMPS_PAR)
>>>       CALL MPI_FINALIZE(IERR)
>>>       END
>>> ( This does nothing but call the initialize & finalize
>>> routines of MUMPS & MPI. )
>>> command line : mpirun -host node03 -np 16 ./testrun
>>> (See attached file: