Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Low performance of Open MPI-1.3 over Gigabit
From: Ralph Castain (rhc_at_[hidden])
Date: 2009-03-05 18:00:34


On Mar 5, 2009, at 1:29 PM, Jeff Squyres wrote:

> On Mar 5, 2009, at 1:54 AM, Sangamesh B wrote:
>
>> The Fortran application I'm using here is CPMD 3.11.
>>
>> I don't think the processor is Nehalem:
>>
>> Intel(R) Xeon(R) CPU X5472 @ 3.00GHz
>>
>> The installation procedure was the same on both clusters. I've not
>> set mpi_affinity.
>>
>> This is a memory-intensive application, but this job was not using
>> that much memory.
>>
>> Regarding CPU and ELAPSED times: in general (for a parallel program),
>> the CPU time should be greater than the elapsed time. Right?
>>
>
> It depends on exactly what you're reporting for ELAPSED time. Is
> that wall clock time? Or user time? Or something else?
>
> Ralph and I disagree on this point, but my opinion is that the only
> meaningful time reported in a parallel application is the wall clock
> time. The CPU time can be badly skewed by a variety of things, such
> as filesystem I/O, network activity (depending on whether or not you
> have an OS-bypass network), etc.

You have gradually worn me down on this point... :D
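[A minimal sketch of the distinction under discussion; it is not from the
thread. It assumes an MPI installation, a C compiler, and a hypothetical
file name timing.c. MPI_Wtime() reports wall-clock time, while clock()
reports the process's CPU time, so the two can diverge around
communication.]

    /* Sketch: contrast wall-clock and CPU time around an MPI step.
     * Compile with "mpicc timing.c -o timing" and run with, e.g.,
     * "mpirun -np 4 ./timing".  The buffer size is arbitrary. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;                 /* 1M doubles per rank */
        double *buf = malloc(n * sizeof(double));
        for (int i = 0; i < n; ++i)
            buf[i] = (double)rank;

        double wall0 = MPI_Wtime();            /* wall-clock start */
        clock_t cpu0 = clock();                /* process CPU start */

        /* A communication-heavy step: while a rank waits on the
         * network, wall-clock time advances, but whether its CPU
         * time advances depends on how the library waits. */
        MPI_Allreduce(MPI_IN_PLACE, buf, n, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        double wall = MPI_Wtime() - wall0;
        double cpu  = (double)(clock() - cpu0) / CLOCKS_PER_SEC;

        printf("rank %d: wall = %.3f s, cpu = %.3f s\n",
               rank, wall, cpu);

        free(buf);
        MPI_Finalize();
        return 0;
    }

[How far the two numbers diverge depends on the transport: with a
busy-polling progress engine the CPU time stays close to the wall time
even while a rank is only waiting, whereas with a blocking,
interrupt-driven transport the CPU time can be far lower. That is one
concrete way CPU time gets skewed by network activity, as described
above.]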

>
>
> --
> Jeff Squyres
> Cisco Systems
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users