Open MPI User's Mailing List Archives

From: Tim S. Woodall (twoodall_at_[hidden])
Date: 2005-10-31 11:12:10


Hello Mike,

Mike Houston wrote:
> When only sending a few messages, we get reasonably good IB performance,
> ~500 MB/s (MVAPICH is 850 MB/s). However, if I crank the number of
> messages up, we drop to 3 MB/s(!!!). This is with the OSU NBCL
> mpi_bandwidth test. We are running Mellanox IB Gold 1.8 with 3.3.3
> firmware on PCI-X (Cougar) boards. Everything works with MVAPICH, but
> we really need the thread support in Open MPI.
>
> Ideas? I noticed there is a plethora of runtime options configurable
> for mvapi. Do I need to tweak these to get performance up?
>

You might try running with:

mpirun -mca mpi_leave_pinned 1

which will cause the mvapi BTL to maintain an MRU (most recently used)
cache of memory registrations, rather than dynamically pinning/unpinning
memory on every transfer.
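
If you'd rather not pass the flag on every run, the same parameter can
go in an MCA parameter file, which Open MPI reads at startup (a sketch;
create the file if it doesn't already exist):

  # $HOME/.openmpi/mca-params.conf
  mpi_leave_pinned = 1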

If this does not resolve the bandwidth problem, try increasing the
resources allocated to each connection:

-mca btl_mvapi_rd_min 128
-mca btl_mvapi_rd_max 256
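
Putting the pieces together, the invocation would look something like
the following (the hostfile and benchmark binary names are placeholders
for whatever you are running):

  mpirun -np 2 -hostfile myhosts \
      -mca mpi_leave_pinned 1 \
      -mca btl_mvapi_rd_min 128 \
      -mca btl_mvapi_rd_max 256 \
      ./mpi_bandwidth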

Also, can you forward me a copy of the test code or a reference to it?
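
In the meantime, the traffic pattern I'd expect to trigger this is many
large nonblocking sends in flight at once, each touching a fresh buffer
region. A rough, self-contained sketch of that pattern (not the OSU
code; the message size and window depth are arbitrary; run with at
least 2 ranks):

  /* Sketch of a windowed bandwidth loop: rank 0 posts WINDOW
   * nonblocking sends, rank 1 posts matching receives, and both
   * wait for completion. Each message uses a distinct buffer
   * region, which forces a new registration on every transfer
   * unless a registration cache (mpi_leave_pinned) is in effect. */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define MSG_SIZE (1 << 20)   /* 1 MB per message (arbitrary) */
  #define WINDOW   64          /* messages in flight (arbitrary) */

  int main(int argc, char **argv)
  {
      int rank;
      MPI_Request reqs[WINDOW];
      char *buf = malloc((size_t)MSG_SIZE * WINDOW);

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      memset(buf, 0, (size_t)MSG_SIZE * WINDOW);

      double t0 = MPI_Wtime();
      if (rank == 0) {
          for (int i = 0; i < WINDOW; i++)
              MPI_Isend(buf + (size_t)i * MSG_SIZE, MSG_SIZE, MPI_CHAR,
                        1, i, MPI_COMM_WORLD, &reqs[i]);
      } else if (rank == 1) {
          for (int i = 0; i < WINDOW; i++)
              MPI_Irecv(buf + (size_t)i * MSG_SIZE, MSG_SIZE, MPI_CHAR,
                        0, i, MPI_COMM_WORLD, &reqs[i]);
      }
      if (rank < 2)
          MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
      double t1 = MPI_Wtime();

      if (rank == 0)
          printf("%.1f MB/s\n",
                 (double)MSG_SIZE * WINDOW / (t1 - t0) / 1e6);

      free(buf);
      MPI_Finalize();
      return 0;
  }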

Thanks,
Tim