Victor,

Build the FT benchmark, and build it as a class B problem. This will run in the 1-2 minute range instead of the 2-4 seconds the class A CG benchmark takes.
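A build-and-run sketch, assuming the same NPB3.2-MPI tree shown in the session below (NPB make targets are lowercase; adjust NPROCS to taste):

```shell
# From the NPB3.2-MPI top-level directory: build FT, class B, for 2 processes.
make ft CLASS=B NPROCS=2

# The binary lands in bin/ as ft.B.2; run it the same way as cg.A.2:
mpirun -np 2 bin/ft.B.2
```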


Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas



Terry Frankcombe wrote:
Hi Victor

I'd suggest 3 seconds of CPU time is far, far too small a problem to do
scaling tests with.  Even with only 2 CPUs, I wouldn't go below 100
times that.


On Mon, 2007-06-11 at 01:10 -0700, victor marian wrote:
Hi Jeff

I ran the NAS Parallel Benchmark and it gives me
-bash%/export/home/vmarian/fortran/benchmarks/NPB3.2/NPB3.2-MPI/bin$
mpirun -np 1 cg.A.1
--------------------------------------------------------------------------
[0,1,0]: uDAPL on host SERVSOLARIS was unable to find
any NICs.
Another transport will be used instead, although this
may result in
lower performance.
--------------------------------------------------------------------------
 NAS Parallel Benchmarks 3.2 -- CG Benchmark

 Size:      14000
 Iterations:    15
 Number of active processes:     1
 Number of nonzeroes per row:       11
 Eigenvalue shift: .200E+02
 Benchmark completed
 VERIFICATION SUCCESSFUL
 Zeta is      0.171302350540E+02
 Error is     0.512264003323E-13


 CG Benchmark Completed.
 Class           =                        A
 Size            =                    14000
 Iterations      =                       15
 Time in seconds =                     3.02
 Total processes =                        1
 Compiled procs  =                        1
 Mop/s total     =                   495.93
 Mop/s/process   =                   495.93
 Operation type  =           floating point
 Verification    =               SUCCESSFUL
 Version         =                      3.2
 Compile date    =              11 Jun 2007


-bash%/export/home/vmarian/fortran/benchmarks/NPB3.2/NPB3.2-MPI/bin$
mpirun -np 2 cg.A.2
--------------------------------------------------------------------------
[0,1,0]: uDAPL on host SERVSOLARIS was unable to find
any NICs.
Another transport will be used instead, although this
may result in
lower performance.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
[0,1,1]: uDAPL on host SERVSOLARIS was unable to find
any NICs.
Another transport will be used instead, although this
may result in
lower performance.
--------------------------------------------------------------------------


 NAS Parallel Benchmarks 3.2 -- CG Benchmark

 Size:      14000
 Iterations:    15
 Number of active processes:     2
 Number of nonzeroes per row:       11
 Eigenvalue shift: .200E+02

 Benchmark completed
 VERIFICATION SUCCESSFUL
 Zeta is      0.171302350540E+02
 Error is     0.522633719989E-13


 CG Benchmark Completed.
 Class           =                        A
 Size            =                    14000
 Iterations      =                       15
 Time in seconds =                     2.47
 Total processes =                        2
 Compiled procs  =                        2
 Mop/s total     =                   606.32
 Mop/s/process   =                   303.16
 Operation type  =           floating point
 Verification    =               SUCCESSFUL
 Version         =                      3.2
 Compile date    =              11 Jun 2007


   You can see that the scaling is not as good as
yours. Maybe I am having communication problems
between processors.
   You can also see that I am faster on one process
compared to your processor.
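The scaling in the two sets of timings (the Solaris run above and the dual-core AMD run quoted below) can be quantified with the usual speedup/efficiency formulas; a quick back-of-the-envelope sketch, not part of NPB:

```python
def speedup(t1, tp):
    """Ratio of 1-process wall time to p-process wall time."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Parallel efficiency: speedup divided by process count."""
    return t1 / (p * tp)

# Victor's Solaris box: cg.A ran in 3.02 s on 1 process, 2.47 s on 2.
s_victor = speedup(3.02, 2.47)
e_victor = efficiency(3.02, 2.47, 2)

# Jeff's dual-core AMD: 4.75 s on 1 process, 2.48 s on 2.
s_jeff = speedup(4.75, 2.48)
e_jeff = efficiency(4.75, 2.48, 2)

print(f"Victor: {s_victor:.2f}x speedup, {e_victor:.0%} efficiency")
print(f"Jeff:   {s_jeff:.2f}x speedup, {e_jeff:.0%} efficiency")
```

With such short runtimes (per Terry's point above), these numbers are dominated by startup cost and OS noise, so the ~61% vs ~96% gap should not be over-interpreted.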

                                       Victor





--- Jeff Pummill <jpummil@uark.edu> wrote:

Perfect! Thanks Jeff!

The NAS Parallel Benchmark on a dual core AMD
machine now returns this...
[jpummil@localhost bin]$ mpirun -np 1 cg.A.1
NAS Parallel Benchmarks 3.2 -- CG Benchmark
CG Benchmark Completed.
 Class           =                        A
 Size            =                    14000
 Iterations      =                       15
 Time in seconds =                     4.75
 Total processes =                        1
 Compiled procs  =                        1
 Mop/s total     =                   315.32

...and...

[jpummil@localhost bin]$ mpirun -np 2 cg.A.2
NAS Parallel Benchmarks 3.2 -- CG Benchmark
 CG Benchmark Completed.
 Class           =                        A
 Size            =                    14000
 Iterations      =                       15
 Time in seconds =                     2.48
 Total processes =                        2
 Compiled procs  =                        2
 Mop/s total     =                   604.46

Not quite linear, but one must account for all of the OS traffic
that one core or the other must deal with.


Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
http://hpc.uark.edu

"A supercomputer is a device for turning compute-bound
problems into I/O-bound problems." -Seymour Cray


Jeff Squyres wrote:
Just remove the -L and -l arguments -- OMPI's "mpif90" (and other
wrapper compilers) will do all that magic for you.

Many -L/-l arguments in MPI application Makefiles are throwbacks to
older versions of MPICH wrapper compilers that didn't always work
properly.  Those days are long gone; most (all?) MPI wrapper
compilers do not need you to specify -L/-l these days.
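Applied to NPB, that advice amounts to a config/make.def along these lines (a sketch, assuming the stock NPB3.2 variable names; the empty FMPI_LIB is the point -- the wrapper supplies the MPI libraries itself):

```makefile
# config/make.def fragment for building NPB with Open MPI.
# mpif90 adds the MPI -L/-l flags itself, so leave them empty.
MPIF77   = mpif90
FLINK    = mpif90
FMPI_LIB =
FMPI_INC =
```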

On Jun 10, 2007, at 3:08 PM, Jeff Pummill wrote:

Maybe the "dumb question" of the week, but here goes...

I am trying to compile a piece of code (NPB) under OpenMPI and I am
having a problem with specifying the right library. Possibly
something I need to define in an LD_LIBRARY_PATH statement?

Using Gnu mpich, the line looked like this...

FMPI_LIB  = -L/opt/mpich/gnu/lib/ -lmpich

I tried to replace this with...

FMPI_LIB  = -L/usr/lib/openmpi/ -llibmpi

to which the make responded...

mpif90 -O -o ../bin/cg.A.2 cg.o ../common/print_results.o ../common/randdp.o ../common/timers.o -L/usr/lib/openmpi/ -llibmpi
/usr/bin/ld: cannot find -llibmpi
collect2: ld returned 1 exit status

Wrong library file? Setup or path issue?

-- 
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas

_______________________________________________
users mailing list
users@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
