
Open MPI User's Mailing List Archives


From: Allan Menezes (amenezes007_at_[hidden])
Date: 2006-08-17 03:34:56


Hi Anyone,
     Sorry about the spam, but I tried MPICH2-1.0.4p1 and got 6 GFlops
this way.
I first configured mpich2 with --with-device=ch3 and
--with-comm=ch3:shm, then ran make and make install. I then tried the
first experiment: I ran # mpd --ncpus=2 & and # mpiexec -np 1 -f hosts,
where hosts is the file defined below in my email. I got the same
4.00 GFlops. HPL.dat was set at P=1, Q=1 for 1 node with 2 processors.
I then modified HPL.dat to P=1 and Q=2 for the same hosts file with the
one machine a8.lightning.net and got 6 GFlops. I shall try the same
with Open MPI, modifying the HPL.dat P's and Q's, and post my results
for Open MPI. Remember, I am trying this out on a single dual-core
machine with an SMP kernel.

For Open MPI below, in both cases (slots=2 or slots=1) I tried HPL.dat
with P=1 and Q=1. I shall now try P=1 and Q=2 with slots=2 in the hosts
file.
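For reference, HPL requires P x Q to equal the number of MPI processes, and the usual advice is to keep the grid as close to square as possible with P <= Q (which is why P=1, Q=2 is the natural grid for 2 processes). A small sketch of that rule (pick_grid is my own helper name, not part of HPL):

```python
import math

def pick_grid(nprocs: int) -> tuple[int, int]:
    """Pick an HPL process grid (P, Q) with P * Q == nprocs.

    HPL prefers P <= Q and a near-square grid, so take P as the
    largest divisor of nprocs not exceeding sqrt(nprocs).
    """
    p = 1
    for d in range(1, math.isqrt(nprocs) + 1):
        if nprocs % d == 0:
            p = d
    return p, nprocs // p

# For the 2-process runs in this email this gives P=1, Q=2.
```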
 
    I have an 18-node cluster of heterogeneous machines, running the
FC5 SMP kernel and OSCAR 5.0 beta.
I tried the following out on a machine with Open MPI 1.1 and 1.1.1b4.
The machine has a D-Link DGE-530T 1 Gb/s Ethernet card, a 2.66 GHz
dual-core Intel Pentium D 805 CPU, and 1 GB of dual-channel DDR 3200
RAM. I compiled the ATLAS libraries (ver 3.7.13 beta) for this machine
and HPL (the xhpl executable) and ran the following experiment twice:
content of my "hosts" file1 for this machine for 1st experiment:
a8.lightning.net slots=2
content of my "hosts" file2 for this machine for 2nd experiment:
a8.lightning.net

On the single node I ran with HPL.dat N=6840 and NB=120. With 1024 MB
of RAM: N = sqrt(0.75 * ((1024 - 32 MB video overhead)/2) * 1000000 / 8)
= approx 6840, i.e. 512 MB of RAM per CPU; otherwise the OS falls back
to the hard drive for virtual memory. This way the problem resides
entirely in RAM.
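The N calculation above can be checked directly. A minimal sketch (hpl_problem_size is a hypothetical helper; 0.75 is the usual "fill about 75% of usable memory" rule of thumb, and 8 bytes is the size of a double):

```python
import math

def hpl_problem_size(ram_mb: int, video_mb: int, cpus: int, nb: int) -> int:
    """Estimate HPL's N so the N*N matrix of doubles fills ~75% of RAM.

    Usable bytes per CPU = (ram_mb - video_mb) / cpus * 1e6; the matrix
    needs 8 * N * N bytes, so N = sqrt(0.75 * usable / 8), rounded to a
    multiple of the block size NB as HPL prefers.
    """
    usable_bytes = (ram_mb - video_mb) / cpus * 1_000_000
    n = math.sqrt(0.75 * usable_bytes / 8)
    return round(n / nb) * nb

# (1024 - 32) MB split over 2 CPUs with NB = 120 gives N = 6840,
# matching the value used in the email.
```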
I ran this command twice, once for each of the two hosts files above:
# mpirun --prefix /opt/openmpi114 --hostfile hosts -mca btl tcp,self
-np 1 ./xhpl
In both cases the performance stays the same, around 4.040 GFlops.
Since slots=2 runs two CPUs, I would expect a performance increase over
experiment 2 of 50-100%, but I see no difference. Can anybody tell me
why this is so?
I have not tried mpich 2.
Thank you,
Regards,
Allan Menezes