
Open MPI User's Mailing List Archives


From: Galen M. Shipman (gshipman_at_[hidden])
Date: 2006-03-16 12:56:11


Hi Jean,

Take a look here: http://www.open-mpi.org/faq/?category=infiniband#ib-leave-pinned

This should improve performance for micro-benchmarks and some
applications.
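
For example, assuming the benchmark executable is named "pingpong" (a
placeholder name), leave-pinned can be enabled on the mpirun command
line; including the "self" BTL alongside mvapi is generally a good idea:

  mpirun -np 2 --mca btl mvapi,self --mca mpi_leave_pinned 1 ./pingpong

With mpi_leave_pinned set, Open MPI keeps registered memory pinned
between messages instead of re-registering buffers on every transfer,
which mostly helps large-message bandwidth.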

Please let me know if this doesn't solve the issue.

Thanks,
Galen
On Mar 16, 2006, at 10:34 AM, Jean Latour wrote:

> Hello,
>
> Testing the performance of Open MPI over InfiniBand, I get the
> following results:
>
> 1) Hardware is: SilverStorm InfiniBand interface
>
> 2) Open MPI version is (from ompi_info):
> Open MPI: 1.0.2a9r9159
> Open MPI SVN revision: r9159
> Open RTE: 1.0.2a9r9159
> Open RTE SVN revision: r9159
> OPAL: 1.0.2a9r9159
> OPAL SVN revision: r9159
>
> 3) Cluster of dual-processor Opteron 248 (2.2 GHz) nodes
>
> Configure was run with the option --with-mvapi=path-to-mvapi
>
> 4) a C-coded ping-pong gives the following values (a minimal sketch
> of such a benchmark appears after the quoted message):
>
> LOOPS: 1000 BYTES: 4096 SECONDS: 0.085557 MBytes/sec: 95.749051
> LOOPS: 1000 BYTES: 8192 SECONDS: 0.050657 MBytes/sec: 323.429912
> LOOPS: 1000 BYTES: 16384 SECONDS: 0.084038 MBytes/sec: 389.918757
> LOOPS: 1000 BYTES: 32768 SECONDS: 0.163161 MBytes/sec: 401.665104
> LOOPS: 1000 BYTES: 65536 SECONDS: 0.306694 MBytes/sec: 427.370561
> LOOPS: 1000 BYTES: 131072 SECONDS: 0.529589 MBytes/sec: 494.995011
> LOOPS: 1000 BYTES: 262144 SECONDS: 0.952616 MBytes/sec: 550.366583
> LOOPS: 1000 BYTES: 524288 SECONDS: 1.927987 MBytes/sec: 543.870859
> LOOPS: 1000 BYTES: 1048576 SECONDS: 3.673732 MBytes/sec: 570.850562
> LOOPS: 1000 BYTES: 2097152 SECONDS: 9.993185 MBytes/sec: 419.716435
> LOOPS: 1000 BYTES: 4194304 SECONDS: 18.211958 MBytes/sec: 460.609893
> LOOPS: 1000 BYTES: 8388608 SECONDS: 35.421490 MBytes/sec: 473.645124
>
> My questions are:
> a) Is Open MPI doing TCP/IP over IB in this case? (I guess so)
> b) Is it possible to significantly improve these values by changing
> the defaults?
>
> I have tried several mca btl parameters, but without improving the
> maximum bandwidth.
> For example: --mca btl mvapi --mca btl_mvapi_max_send_size 8388608
>
> c) Is it possible that other IB hardware implementations get better
> performance with Open MPI?
>
> d) Is it possible to use specific IB drivers for optimal
> performance? (it should reach almost 800 MB/sec)
>
> Thank you very much for your help
> Best Regards,
> Jean Latour
>
> <latour.vcf>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
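
For reference, a minimal C round-trip ping-pong of the kind described
in the quoted message might look like the sketch below. This is a
reconstruction under assumptions, not Jean's actual benchmark: the
message size is fixed at a single value, and the bandwidth formula
counts both directions of the round trip (which matches the MBytes/sec
figures quoted above).

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Minimal round-trip ping-pong between ranks 0 and 1.
     The message size and loop count are fixed here for brevity;
     a full benchmark would presumably sweep over sizes. */
  int main(int argc, char **argv)
  {
      const int loops = 1000;
      const int bytes = 1048576;   /* one of the sizes reported above */
      int rank, i;
      char *buf;
      double t0, t1;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      buf = malloc(bytes);

      MPI_Barrier(MPI_COMM_WORLD);
      t0 = MPI_Wtime();
      for (i = 0; i < loops; i++) {
          if (rank == 0) {
              MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
          } else if (rank == 1) {
              MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
          }
      }
      t1 = MPI_Wtime();

      /* Count both directions of the round trip; this convention
         reproduces the MBytes/sec figures quoted above. */
      if (rank == 0)
          printf("LOOPS: %d BYTES: %d SECONDS: %f MBytes/sec: %f\n",
                 loops, bytes, t1 - t0,
                 2.0 * bytes * loops / (t1 - t0) / 1.0e6);

      free(buf);
      MPI_Finalize();
      return 0;
  }

Compile with mpicc and run with, e.g., mpirun -np 2 ./pingpong.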