
Subject: Re: [OMPI users] strange IMB runs
From: Michael Di Domenico (mdidomenico4_at_[hidden])
Date: 2009-08-12 11:40:48


So, pushing this along a little more:

Running with Open MPI 1.3, SVN r20295:

mpirun -np 2 \
  -mca btl sm,self \
  -mca mpi_paffinity_alone 1 \
  -mca mpi_leave_pinned 1 \
  -mca btl_sm_eager_limit 8192 \
  $PWD/IMB-MPI1 pingpong

Yields ~390MB/sec

So we're getting there, but still only about half of HP-MPI's ~700-800MB/sec.
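
In case it's useful, the full list of sm BTL knobs and their current defaults can be dumped with ompi_info (the exact output varies between Open MPI versions):

ompi_info --param btl sm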

On Thu, Aug 6, 2009 at 9:30 AM, Michael Di Domenico<mdidomenico4_at_[hidden]> wrote:
> Here's an interesting data point. I installed the RHEL rpm version of
> OpenMPI 1.2.7-6 for ia64
>
> mpirun -np 2 -mca btl self,sm -mca mpi_paffinity_alone 1 -mca
> mpi_leave_pinned 1 $PWD/IMB-MPI1 pingpong
>
> With v1.3 and -mca btl self,sm I get ~150MB/sec
> With v1.3 and -mca btl self,tcp I get ~550MB/sec
>
> With v1.2.7-6 and -mca btl self,sm I get ~225MB/sec
> With v1.2.7-6 and -mca btl self,tcp I get ~650MB/sec
>
>
> On Fri, Jul 31, 2009 at 10:42 AM, Edgar Gabriel<gabriel_at_[hidden]> wrote:
>> Michael Di Domenico wrote:
>>>
>>> mpi_leave_pinned didn't help; still at ~145MB/sec.
>>> Raising btl_sm_eager_limit from 4096 to 8192 pushes me up to ~212MB/sec, but
>>> pushing it past that doesn't change anything further.
>>>
>>> Are there any intelligent programs that can go through and test all
>>> the different permutations of tunables for Open MPI? Outside of me
>>> just writing an ugly looping script...
>>
>> Actually, there is:
>>
>> http://svn.open-mpi.org/svn/otpo/trunk/
>>
>> This tool has been used to tune openib parameters, and I would guess that it
>> could be used without any modification to also run NetPIPE over sm...
>>
>> Thanks
>> Edgar
>>>
>>> On Wed, Jul 29, 2009 at 1:55 PM, Dorian Krause<doriankrause_at_[hidden]> wrote:
>>>>
>>>> Hi,
>>>>
>>>> --mca mpi_leave_pinned 1
>>>>
>>>> might help. Take a look at the FAQ for various tuning parameters.
>>>>
>>>>
>>>> Michael Di Domenico wrote:
>>>>>
>>>>> I'm not sure I understand what's actually happened here. I'm running
>>>>> IMB on an HP Superdome, just comparing the PingPong benchmark:
>>>>>
>>>>> HP-MPI v2.3
>>>>> Max ~ 700-800MB/sec
>>>>>
>>>>> OpenMPI v1.3
>>>>> -mca btl self,sm - Max ~ 125-150MB/sec
>>>>> -mca btl self,tcp - Max ~ 500-550MB/sec
>>>>>
>>>>> Is this behavior expected? Are there any tunables to get the OpenMPI
>>>>> sockets up near HP-MPI?
>
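
P.S. Until I try otpo, the "ugly looping script" I had in mind for sweeping the eager limit looks roughly like this (untested sketch; the particular limit values are just guesses, and the tail simply keeps the large-message end of the IMB table):

#!/bin/sh
# Sweep btl_sm_eager_limit and print the tail of the PingPong table for each value.
for limit in 2048 4096 8192 16384 32768; do
  echo "== btl_sm_eager_limit=$limit =="
  mpirun -np 2 \
    -mca btl sm,self \
    -mca mpi_paffinity_alone 1 \
    -mca mpi_leave_pinned 1 \
    -mca btl_sm_eager_limit $limit \
    $PWD/IMB-MPI1 pingpong | tail -n 30
done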