Open MPI User's Mailing List Archives

From: Allan Menezes (amenezes007_at_[hidden])
Date: 2006-08-16 22:55:27

Hi everyone,
      I have an 18-node cluster of heterogeneous machines, running the FC5 SMP
kernel and OSCAR 5.0 beta.
I tried the following on a machine with Open MPI 1.1 and 1.1.1b4. The
machine has a D-Link 1 Gb/s DGE-530T Ethernet card, a 2.66 GHz dual-core
Intel Pentium D 805 CPU, and 1 GB of dual-channel DDR 3200 RAM. I compiled
the ATLAS libraries (version 3.7.13 beta) and HPL (the xhpl executable)
for this machine and ran the following experiment twice:
Content of my "hosts" file1 for this machine, for the first experiment: slots=2
Content of my "hosts" file2 for this machine, for the second experiment:
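
(For reference, an Open MPI hosts file lists one machine per line,
optionally followed by a slot count; the line below is only a format
illustration with a placeholder hostname, not the actual contents of
either file:

node01 slots=2

)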

On the single node I ran HPL with N = 6840 and NB = 120 in HPL.dat. The
machine has 1024 MB of RAM, so
N = sqrt(0.75 * ((1024 - 32 video overhead) / 2) * 1000000 / 8) = approx 6840,
i.e. about 512 MB of RAM per CPU; any larger and the OS uses the hard
drive for virtual memory. This way the problem resides entirely in RAM.
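
Spelling out that arithmetic: (1024 - 32) / 2 = 496 MB per process;
0.75 * 496e6 bytes / 8 bytes per double is about 46.5 million matrix
elements; sqrt(46.5e6) is approximately 6819, and rounding to a multiple
of NB = 120 gives N = 6840 (57 * 120).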
I ran this command twice, once with each of the two hosts files above:
# mpirun --prefix /opt/openmpi114 --hostfile hosts -mca btl tcp,self -np 1 ./xhpl
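
(Aside: a quick way to double-check how many processes a given invocation
actually launches is to run a trivial program in place of xhpl; hostname
below is just an illustration:

# mpirun --prefix /opt/openmpi114 --hostfile hosts -np 2 hostname

With -np 2 this prints the node's hostname twice, one line per launched
process.)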
In both cases the performance remains the same, around 4.040 GFlops.
Since I am running with slots=2, i.e. two CPUs, I would expect a 50-100%
performance increase over experiment 2, but I see no difference. Can
anybody tell me why this is so?
I have not tried MPICH2.
Thank you,
Allan Menezes