Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] strange IMB runs
From: George Bosilca (bosilca_at_[hidden])
Date: 2009-07-30 10:08:46


The leave-pinned option will not help in this context. It only helps
devices capable of real RMA operations that require pinned memory,
which unfortunately is not the case for TCP. What is really strange
about your results is that you get about 4 times better bandwidth over
TCP than over shared memory. Over TCP there are two extra memory copies
(compared with sm) plus a bunch of syscalls, so there is absolutely no
reason for it to perform better.
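
As a quick sanity check, it may be worth listing the sm btl parameters
that are actually in effect and rerunning PingPong with each transport
selected explicitly. A rough sketch (I'm assuming the IMB binary is
called IMB-MPI1 and sits in the current directory):

   # show the shared memory btl parameters and their current values
   ompi_info --param btl sm

   # rerun PingPong over shared memory only, then over TCP only
   # (adjust the IMB binary name/path to match your installation)
   mpirun -np 2 --mca btl self,sm ./IMB-MPI1 PingPong
   mpirun -np 2 --mca btl self,tcp ./IMB-MPI1 PingPong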

Is the Open MPI version something you compiled yourself, or did it come
installed with the OS? If you compiled it, could you please send us the
configure line?
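
For reference, what we're after is the full command used to configure
the build; a placeholder example (the prefix and compilers here are
made up) would look like:

   # example only -- substitute your real installation prefix and compilers
   ./configure --prefix=/opt/openmpi-1.3 CC=gcc CXX=g++ F77=gfortran FC=gfortran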

   Thanks,
     george.

On Jul 29, 2009, at 13:55 , Dorian Krause wrote:

> Hi,
>
> --mca mpi_leave_pinned 1
>
> might help. Take a look at the FAQ for various tuning parameters.
>
>
> Michael Di Domenico wrote:
>> I'm not sure I understand what's actually happened here. I'm running
>> IMB on an HP Superdome, just comparing the PingPong benchmark
>>
>> HP-MPI v2.3
>> Max ~ 700-800MB/sec
>>
>> Open MPI v1.3
>> -mca btl self,sm - Max ~ 125-150MB/sec
>> -mca btl self,tcp - Max ~ 500-550MB/sec
>>
>> Is this behavior expected? Are there any tunables to get the Open MPI
>> sockets up near HP-MPI?