
Subject: Re: [OMPI users] Lower performance on a Gigabit node compared to infiniband node
From: Ralph Castain (rhc_at_[hidden])
Date: 2009-03-09 10:35:17


Isn't this a re-posting of an email thread we already addressed?

On Mar 9, 2009, at 8:30 AM, Sangamesh B wrote:

> Dear Open MPI team,
>
> With Open MPI 1.3, the Fortran application CPMD is installed on a
> Rocks-4.3 cluster of dual-processor quad-core Xeon nodes @ 3 GHz (8
> cores per node).
>
> Two jobs (4 processes each) are run separately, one per node: one
> node has an InfiniBand connection (4 GB RAM) and the other has a
> Gigabit Ethernet connection (8 GB RAM).
>
> Note that the network connectivity should not be needed, since each
> job runs standalone on its own node.
>
> Since each job runs on a single node, with no communication between
> nodes, both jobs should perform the same regardless of network
> connectivity. But that is not the case here: the Gigabit job takes
> about twice as long as the InfiniBand job.
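>
> As a sanity check, I assume the interconnect could be ruled out
> entirely by forcing Open MPI's shared-memory BTL for the single-node
> run (the binary and input file names below are only illustrative):
>
>   mpirun --mca btl self,sm -np 4 ./cpmd.x input.inp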
>
> Following are the details of the two jobs:
>
> Infiniband Job:
>
> CPU TIME : 0 HOURS 10 MINUTES 21.71 SECONDS
> ELAPSED TIME : 0 HOURS 10 MINUTES 23.08 SECONDS
> *** CPMD| SIZE OF THE PROGRAM IS 301192/ 571044 kBYTES ***
>
> Gigabit Job:
>
> CPU TIME : 0 HOURS 12 MINUTES 7.93 SECONDS
> ELAPSED TIME : 0 HOURS 21 MINUTES 0.07 SECONDS
> *** CPMD| SIZE OF THE PROGRAM IS 123420/ 384344 kBYTES ***
>
> More details are attached in a file.
>
> Why is there such a large difference between CPU TIME and ELAPSED
> TIME for the Gigabit job?
>
> This could be an issue with Open MPI itself. What could be the reason?
>
> Are there any flags that need to be set?
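>
> For example, would processor affinity make a difference here? A
> sketch of what I mean (mpi_paffinity_alone is an MCA parameter in
> Open MPI 1.3; binary and input names are illustrative):
>
>   mpirun --mca mpi_paffinity_alone 1 -np 4 ./cpmd.x input.inp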
>
> Thanks in advance,
> Sangamesh
> <cpmd_gb_ib_1node>