Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] SpMV Benchmarks
From: Paul Monday (Parallel Scientific) (paul.monday_at_[hidden])
Date: 2011-05-06 11:42:11


Thank you, Jed, it sounds like -log_summary should be sufficient for my
needs!

I appreciate your help :)

Have a great weekend!

Paul Monday

On 5/6/11 3:38 AM, Jed Brown wrote:
> On Thu, May 5, 2011 at 23:15, Paul Monday (Parallel Scientific)
> <paul.monday_at_[hidden]> wrote:
>
> Hi, I'm hoping someone can help me locate a SpMV benchmark that
> runs w/ Open MPI so I can benchmark how my systems are interacting
> with the network as I add nodes / cores to the pool of systems. I
> can find SpMV benchmarks for single processor / OpenMP all over,
> but these networked ones are proving harder to come by. I located
> Lis (http://www.ssisc.org/lis/) but it seems more of a solver than
> a benchmarking program.
>
>
> I would suggest using PETSc. It is a solver library rather than a
> contrived benchmark suite, but the examples give you access to many
> different matrices and you can use many different formats without
> changing the code. If you run with -log_summary, you will get a useful
> table showing the performance of different operations
> (time/balance/communication/reductions/flops/etc.). Also note that SpMV
> is usually not an end in itself; it is usually part of a
> preconditioned Krylov iteration, so the performance of all the pieces
> matters.
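
A minimal sketch of such a run (not from this thread; the matrix below is a
toy 1-D Laplacian, and the size and repetition count are arbitrary) could
look like the following, compiled against a recent PETSc and launched with,
e.g., mpiexec -n 4 ./spmv_sketch -log_summary (newer PETSc releases call
this option -log_view):

    /* spmv_sketch.c: assemble a distributed AIJ matrix and time repeated
     * MatMult calls; -log_summary (or -log_view) then reports a MatMult
     * row with time, flop rate, and message counts per rank.
     * Error checking is omitted for brevity. */
    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat      A;
      Vec      x, y;
      PetscInt n = 1000000, i, Istart, Iend;

      PetscInitialize(&argc, &argv, NULL, NULL);

      /* 1-D Laplacian of global size n, distributed across the ranks */
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
      MatSetFromOptions(A);               /* honors -mat_type aij/baij/... */
      MatSetUp(A);
      MatGetOwnershipRange(A, &Istart, &Iend);
      for (i = Istart; i < Iend; i++) {
        if (i > 0)   MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);
        if (i < n-1) MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      MatCreateVecs(A, &x, &y);           /* MatGetVecs() in older PETSc */
      VecSet(x, 1.0);

      /* the repeated products are what show up as MatMult in the log */
      for (i = 0; i < 100; i++) MatMult(A, x, y);

      VecDestroy(&x);
      VecDestroy(&y);
      MatDestroy(&A);
      PetscFinalize();
      return 0;
    }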
>
> If you are concerned with absolute performance, then you should
> consider using petsc-dev since it tends to have better memory
> performance due to software prefetch. This is important for good reuse
> of high-level caches since otherwise the matrix entries flush out the
> useful stuff. It usually makes between a 20% and 30% improvement, a bit
> more for some symmetric and triangular kernels. Many of the sparse
> matrix kernels did not have software prefetch as of the 3.1 release.
> Remember:
>
> "The easiest way to make software scalable is to make it sequentially
> inefficient." (Gropp, 1999)
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users