Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Myrinet optimization with OMP1.3 and macosX
From: Scott Atchley (atchley_at_[hidden])
Date: 2009-05-06 13:31:26

On May 4, 2009, at 10:54 AM, Ricardo Fernández-Perea wrote:

> I finally have the opportunity to run the IMB-3.2 benchmark over Myrinet.
> I am running on a cluster of 16 Xserve nodes connected with Myrinet;
> 15 of them have 8 cores and the last one has 4 cores, giving a limit
> of 124 processes.
> I have run the tests with the -bynode option, so the runs from 2 to 16
> processes always place 1 process per node.
> The following tests (PingPong, PingPing, Sendrecv, Exchange) show a
> strong drop in performance at the 64 KB message size.
> Any idea where I should look for the cause?
> Ricardo

Hi Ricardo,

I believe the pingpong results show the drop that you are
experiencing: performance drops at 64 KB and returns to
the previous level at 128 KB.

What you are seeing in the pingpong results is the changeover from
the eager to the rendezvous protocol within MX. Up to 32 KB, MX uses
an eager protocol (the data is sent immediately, even if no matching
receive has been posted). Above 32 KB, it switches to a rendezvous
protocol, where the sender first waits for the receiver to confirm a
posted receive before transferring the bulk data; that extra
handshake is what costs you at 64 KB.

I do not believe that this threshold can be changed. Have you tried
the same application using the MX BTL instead of the MX MTL?
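For comparison, Open MPI lets you select the path at run time via MCA
parameters; a sketch of the two invocations (the application name and
process count are placeholders, and exact parameter support may vary
across 1.3.x releases):

```shell
# Run over the MX MTL (matching done inside the MX library):
mpirun -np 16 --bynode --mca pml cm --mca mtl mx ./IMB-MPI1 PingPong

# Run over the MX BTL (matching done by Open MPI's ob1 PML):
mpirun -np 16 --bynode --mca pml ob1 --mca btl mx,sm,self ./IMB-MPI1 PingPong
```

If the dip moves or disappears with the BTL, that would confirm the
MTL's eager/rendezvous crossover as the cause.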