
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Factor of 10 loss in performance with 1.3.x
From: Steve Kargl (sgk_at_[hidden])
Date: 2009-04-07 15:39:18


On Tue, Apr 07, 2009 at 03:18:31PM -0400, George Bosilca wrote:
> Steve,
>
> I spotted a strange value for the mpi_yield_when_idle MCA parameter. 1
> means your processor is oversubscribed, and this triggers a call to
> sched_yield after each check on the SM. Are you running the job
> oversubscribed? If not, it looks like somehow we don't correctly
> identify that there are multiple cores ...
>
> george.
>

The node is not oversubscribed. Here's top(1) output:

last pid: 90265; load averages: 0.79, 0.40, 0.28 up 89+18:30:27 12:33:36
31 processes: 3 running, 28 sleeping
CPU: 2.3% user, 0.0% nice, 25.8% system, 0.4% interrupt, 71.5% idle
Mem: 27M Active, 28G Inact, 748M Wired, 1304M Cache, 617M Buf, 840M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
90264 kargl 1 101 0 170M 12012K CPU1 1 0:32 98.46% z
90265 kargl 1 101 0 170M 4320K CPU2 2 0:32 97.01% z
  756 root 1 4 0 4668K 928K - 7 8:36 0.00% nfsd
  757 root 1 4 0 4668K 932K - 7 7:57 0.00% nfsd

z is the NetPIPE executable. This node has two quad-core Opteron
processors.

I also see the slowdown if I use node19 instead of node20. Nodes 19 and
20 are identical blades.
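For what it's worth, the heuristic can be sidestepped by pinning the MCA parameter explicitly on the command line (a sketch; ./z and the process count are taken from the run above, and exact option spelling may differ across 1.3.x releases):

```shell
# Force the aggressive (non-yielding) progression mode regardless of
# what the oversubscription detection decides; 0 = spin without
# calling sched_yield.
mpirun --mca mpi_yield_when_idle 0 -np 2 ./z
```

If that restores the 1.2.x performance, it would confirm that the core-count detection, not the transport itself, is at fault.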

-- 
Steve