Open MPI User's Mailing List Archives


Subject: [OMPI users] Lower performance on a Gigabit node compared to infiniband node
From: Sangamesh B (forum.san_at_[hidden])
Date: 2009-03-09 10:30:02


Dear Open MPI team,

      The Fortran application CPMD has been built with Open MPI 1.3 on a
Rocks-4.3 cluster of dual-processor quad-core Xeon nodes @ 3 GHz (8 cores
per node).

Two 4-process jobs were run separately, one per node: one node has an
InfiniBand connection (4 GB RAM) and the other has a Gigabit Ethernet
connection (8 GB RAM).

Note that the network connectivity should not be needed, since each job
runs entirely on its own node in stand-alone mode.
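For example, to rule out any network transport on a single node, Open MPI
can be restricted to its shared-memory and self BTLs at launch time (a
sketch; "cpmd.x" and the input/output file names are placeholders, not the
actual job script):

```shell
# Run 4 processes on one node using only the self and shared-memory (sm)
# BTLs, so neither the InfiniBand (openib) nor the TCP BTL can be selected.
# "cpmd.x" is a placeholder for the actual CPMD executable and input.
mpirun -np 4 --mca btl self,sm ./cpmd.x input.in > output.log
```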

Since each job runs on a single node, with no communication between nodes,
the performance of the two jobs should be the same irrespective of the
network connectivity. But that is not the case here: the Gigabit job takes
twice as long as the InfiniBand job.

Following are the details of two jobs:

InfiniBand Job:

      CPU TIME : 0 HOURS 10 MINUTES 21.71 SECONDS
   ELAPSED TIME : 0 HOURS 10 MINUTES 23.08 SECONDS
 *** CPMD| SIZE OF THE PROGRAM IS 301192/ 571044 kBYTES ***

Gigabit Job:

       CPU TIME : 0 HOURS 12 MINUTES 7.93 SECONDS
   ELAPSED TIME : 0 HOURS 21 MINUTES 0.07 SECONDS
 *** CPMD| SIZE OF THE PROGRAM IS 123420/ 384344 kBYTES ***

More details are attached in a file.

Why is there such a large difference between CPU TIME and ELAPSED TIME for
the Gigabit job?
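As a general illustration (not specific to CPMD): a large gap between
elapsed and CPU time means the processes spent time waiting, e.g. on I/O,
swapping, or scheduling, rather than computing. A minimal sketch of the
distinction:

```shell
#!/bin/sh
# A process that waits (here via sleep) accrues elapsed (wall-clock) time
# without accruing CPU time; swapping and I/O stalls behave the same way.
start=$(date +%s)
sleep 2              # stands in for time spent waiting, not computing
end=$(date +%s)
elapsed=$((end - start))
echo "elapsed: ${elapsed}s, while CPU time consumed by the sleep is ~0s"
```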

Could this be an issue with Open MPI itself? What could be the reason?

Are there any flags that need to be set?

Thanks in advance,
Sangamesh