Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Myrinet optimization with OMP1.3 and macosX
From: Ricardo Fernández-Perea (rfernandezperea_at_[hidden])
Date: 2009-03-20 13:32:34


It is the F-2M, but I think for inter-node communication they should be
equivalent.
I have not run an MPI pingpong benchmark yet.
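[For reference, the kind of MPI pingpong being discussed can be sketched
roughly as below. This is only an illustration, not the IMB or mx_pingpong
source; the message size, iteration count, and the hostnames in the launch
line are placeholders.]

/*
 * Minimal MPI pingpong sketch: rank 0 and rank 1 bounce a message back and
 * forth, and rank 0 reports half the round-trip time as latency plus the
 * resulting bandwidth.
 *
 * Example build and launch (hostnames are placeholders):
 *   mpicc pingpong.c -o pingpong
 *   mpirun -np 2 --host node1,node2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;        /* matches the "1000 iterations" above */
    const int len   = 1048576;     /* 1 MiB message, as in the table */
    char *buf = malloc(len);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Warm-up exchange so connection setup is not timed. */
    if (rank == 0) {
        MPI_Send(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        double one_way_s = (t1 - t0) / (2.0 * iters);
        printf("Length %d  Latency(us) %.3f  Bandwidth(MB/s) %.3f\n",
               len, one_way_s * 1e6, len / one_way_s / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}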

The truth is I have a 10-day trip coming next week, and I thought I could
take some optimization "light reading" with me so I know what to look for
when I come back.

Ricardo

On Fri, Mar 20, 2009 at 5:10 PM, Scott Atchley <atchley_at_[hidden]> wrote:

> On Mar 20, 2009, at 11:33 AM, Ricardo Fernández-Perea wrote:
>
> These are the initial results:
>> Running 1000 iterations.
>> Length Latency(us) Bandwidth(MB/s)
>> 0 2.738 0.000
>> 1 2.718 0.368
>> 2 2.707 0.739
>> <snip>
>> 1048576 4392.217 238.735
>> 2097152 8705.028 240.913
>> 4194304 17359.166 241.619
>>
>> with export MX_RCACHE=1
>>
>> Running 1000 iterations.
>> Length Latency(us) Bandwidth(MB/s)
>> 0 2.731 0.000
>> 1 2.705 0.370
>> 2 2.719 0.736
>> <snip>
>> 1048576 4265.846 245.807
>> 2097152 8491.122 246.982
>> 4194304 16953.997 247.393
>>
>
> Ricardo,
>
> I am assuming that these are PCI-X NICs. Given the latency and bandwidth,
> are these "D" model NICs (see the top of the mx_info output)? If so, that
> looks about as good as you can expect.
>
> Have you run the Intel MPI Benchmarks (IMB) or another MPI pingpong-type
> benchmark?
>
> Scott
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>