
Open MPI User's Mailing List Archives


From: Hugh Merz (merz_at_[hidden])
Date: 2006-08-17 01:16:04


On Wed, 16 Aug 2006, Allan Menezes wrote:
> Hi Anyone,
> I have an 18-node cluster of heterogeneous machines. I am using the FC5 SMP
> kernel and OSCAR 5.0 beta.
> I tried the following on one machine with Open MPI 1.1 and 1.1.1b4.
> The machine has a D-Link 1 Gb/s DGE-530T Ethernet card, a 2.66 GHz
> dual-core Intel Pentium D 805 CPU, and 1 GB of dual-channel DDR 3200 RAM.
> I compiled the ATLAS libraries (version 3.7.13beta) and HPL (the xhpl
> executable) for this machine and ran the following experiment twice:
> Contents of my "hosts" file for this machine for the 1st experiment:
> a8.lightning.net slots=2
> Contents of my "hosts" file for this machine for the 2nd experiment:
> a8.lightning.net
>
> On the single node I set N = 6840 and NB = 120 in HPL.dat. With 1024 MB of RAM:
> N = sqrt(0.75 * ((1024 - 32 MB video overhead) / 2) * 1000000 / 8) = approx. 6840,
> i.e. about 512 MB of RAM per CPU; otherwise the OS would use the hard drive for
> virtual memory. This way the problem resides entirely in RAM.
> I ran this command twice, once for each of the two hosts files above:
> # mpirun --prefix /opt/openmpi114 --hostfile hosts -mca btl tcp,self \
>     -np 1 ./xhpl
> In both cases the performance stays the same, around 4.040 GFlops. Since
> experiment 1 runs with slots=2 (two CPUs), I would expect a performance
> increase of 50-100% over experiment 2, but I see no difference. Can anybody
> tell me why this is so?

You are only launching 1 process in both cases. Try `mpirun -np 2 ...` to launch 2 processes, which will load each of your processors with an xhpl process.
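For example, a minimal sketch based on your original command line (same prefix and hostfile assumed; adjust the paths to your installation):

  mpirun --prefix /opt/openmpi114 --hostfile hosts -mca btl tcp,self -np 2 ./xhpl

Note that HPL also reads its process grid (the Ps and Qs lines) from HPL.dat, so with 2 processes you would typically set P x Q = 1 x 2 (or 2 x 1) there as well; otherwise xhpl will still only use one process.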

Please read the FAQ:

http://www.open-mpi.org/faq/?category=running#simple-spmd-run

It also includes a lot of information about slots and how they should be set.
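As a quick sketch, for the hostfile you posted:

  a8.lightning.net slots=2

slots=2 tells Open MPI that up to two processes may be placed on that node before it is considered oversubscribed; it does not by itself change how many processes are launched. When you pass -np, that value determines the number of processes started.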

Hugh

> I have not tried MPICH2.
> Thank you,
> Regards,
> Allan Menezes
>