
From: Graham E Fagg (fagg_at_[hidden])
Date: 2006-02-23 15:03:40


Thanks, Konstantin,
  we are in the process of adding three more alltoall variants plus two
additional alltoallv methods by the middle of next week, so hopefully we
can find the method that works best for your network. (Two of the new
methods use slightly different request management and communication
ordering to reduce 'stress' on interconnect switches.)
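
(For the curious: "communication ordering" here means scheduling which
peer each rank talks to at each step, instead of letting every rank
flood the switch with all of its messages at once. A minimal sketch of
one such scheme, a pairwise exchange with an XOR schedule, follows. It
is illustrative only, not the actual Open MPI tuned-collective code,
and it assumes a power-of-two communicator size and a fixed block size
per peer.)

    /* Pairwise-exchange alltoall sketch: each rank exchanges with
     * exactly one peer per step, so a P-rank job generates P-1
     * ordered rounds of exchanges rather than P*(P-1) simultaneous
     * messages. Illustrative only; assumes P is a power of two so
     * (rank ^ step) is always a valid peer. */
    #include <mpi.h>
    #include <string.h>

    int alltoall_pairwise(char *sendbuf, char *recvbuf,
                          int blocksize, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        /* the local block is just a copy */
        memcpy(recvbuf + (size_t)rank * blocksize,
               sendbuf + (size_t)rank * blocksize, blocksize);

        for (int step = 1; step < size; step++) {
            int peer = rank ^ step;  /* pairs every rank with one partner */
            MPI_Sendrecv(sendbuf + (size_t)peer * blocksize, blocksize,
                         MPI_BYTE, peer, 0,
                         recvbuf + (size_t)peer * blocksize, blocksize,
                         MPI_BYTE, peer, 0, comm, MPI_STATUS_IGNORE);
        }
        return MPI_SUCCESS;
    }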

Thanks,
   Graham.
-----------------------------------------------------------------------
Dr Graham E. Fagg | Distributed, Parallel and Meta-Computing
Innovative Computing Lab. PVM3.4, HARNESS, FT-MPI & Open MPI
Computer Science Dept | Suite 203, 1122 Volunteer Blvd,
University of Tennessee | Knoxville, Tennessee, USA. TN 37996-3450
Email: fagg_at_[hidden] | Phone:+1(865)974-5790 | Fax:+1(865)974-8296
Broken complex systems are always derived from working simple systems
-----------------------------------------------------------------------

  On Thu, 23 Feb 2006, Konstantin Kudin wrote:

> Hi all,
>
> I retested the very recent trunk with skampi 4.1. The "alltoall"
> works quite nicely up to 7 dual Opterons, whereas a bunch of
> isend+irecv's chokes. There appear to be some "special" effects related
> to the 1 Gbit setup we are using (problems with Broadcom adapters?), and
> unless there is a clever "alltoall" scheme, things fall apart for large
> packets. Anyway, if "alltoall" gets pushed even further, that would be
> very useful.
>
> What is the approximate time frame for officially releasing version
> 1.1? A high-performance "alltoall" will be of great use for a whole
> bunch of packages whose most challenging parallel part is
> distributed FFTs, which usually rely on "alltoall".
>
> Konstantin
>
> %%%%%
> openmpi-1.1a1r9108
>
> Columns: ncpu, average latency, std. deviation, ...
>
> message size: 16x4=64kbyte
>
> #/*@insyncol_MPI_Alltoall-nodes-long-SM.ski*/
> 2 275.1 1.6 8 275.1 1.6 8
> 3 1890.2 31.3 8 1890.2 31.3 8
> 4 3467.1 85.0 8 3467.1 85.0 8
> 5 5843.9 66.3 8 5843.9 66.3 8
> 6 8720.9 110.6 8 8720.9 110.6 8
> 7 9598.8 99.6 7 9598.8 99.6 7
> 8 11757.9 256.4 6 11757.9 256.4 6
> 9 13428.2 166.4 8 13428.2 166.4 8
> 10 14623.4 176.2 8 14623.4 176.2 8
> 11 16689.4 171.9 4 16689.4 171.9 4
> 12 18941.4 502.9 5 18941.4 502.9 5
> 13 20105.2 99.0 8 20105.2 99.0 8
> 14 22731.1 155.0 2 22731.1 155.0 2
> 15 123939.7 49248.4 8 123939.7 49248.4 8
> 16 142048.0 43888.8 8 142048.0 43888.8 8
>
> #/*@insyncol_MPI_Alltoall_Isend_Irecv-nodes-long-SM.ski*/
> 2 247.4 0.8 8 247.4 0.8 8
> 3 1861.8 10.1 8 1861.8 10.1 8
> 4 3158.4 24.5 8 3158.4 24.5 8
> 5 4270.0 75.0 2 4270.0 75.0 2
> 6 225351.5 12504.5 2 225351.5 12504.5 2
> 7 228399.5 14770.5 2 228399.5 14770.5 2
> 8 247087.5 14448.4 2 247087.5 14448.4 2
> 9 243806.7 3878.9 8 243806.7 3878.9 8
> 10 248353.0 6640.9 2 248353.0 6640.9 2
> 11 267541.5 5210.1 8 267541.5 5210.1 8
> 12 286600.1 1665.1 2 286600.1 1665.1 2
> 13 277546.5 4208.1 8 277546.5 4208.1 8
> 14 364208.9 98276.9 2 364208.9 98276.9 2
> 15 392139.0 101163.9 2 392139.0 101163.9 2
> 16 367182.1 97711.0 2 367182.1 97711.0 2
>
> #/*@insyncol_MPI_Alltoallv-nodes-long-SM.ski*/
> 2 279.8 1.0 8 279.8 1.0 8
> 3 1633.5 19.9 8 1633.5 19.9 8
> 4 2834.1 12.5 8 2834.1 12.5 8
> 5 4530.1 173.2 8 4530.1 173.2 8
> 6 147548.2 41749.4 8 147548.2 41749.4 8
> 7 248700.8 7621.9 8 248700.8 7621.9 8
> 8 261050.8 5618.6 8 261050.8 5618.6 8
> 9 256836.4 4441.5 8 256836.4 4441.5 8
> 10 274444.8 4019.0 8 274444.8 4019.0 8
> 11 275536.6 6338.7 8 275536.6 6338.7 8
> 12 282339.6 8411.8 8 282339.6 8411.8 8
> 13 303240.3 23093.0 8 303240.3 23093.0 8
> 14 399687.1 49633.4 8 399687.1 49633.4 8
> 15 359080.9 33852.4 8 359080.9 33852.4 8
> 16 337081.5 30155.5 8 337081.5 30155.5 8
>
>
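
(A note on the "bunch of isend+irecv's" pattern benchmarked above: it
amounts to something like the sketch below, where every rank posts all
of its nonblocking sends and receives up front and then waits. With P
ranks that puts P*(P-1) messages in flight at once, which is exactly
the kind of unordered burst a 1 Gbit switch can struggle with at
larger message sizes. This illustrates the pattern; it is not skampi's
actual measurement code.)

    /* Naive isend+irecv alltoall sketch: post everything, then one
     * Waitall. Illustrative only. */
    #include <mpi.h>
    #include <stdlib.h>

    int alltoall_isend_irecv(char *sendbuf, char *recvbuf,
                             int blocksize, MPI_Comm comm)
    {
        int rank, size, nreq = 0;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        MPI_Request *reqs = malloc(2 * (size_t)size * sizeof(MPI_Request));

        /* post all receives first so no send waits on a missing buffer */
        for (int peer = 0; peer < size; peer++)
            MPI_Irecv(recvbuf + (size_t)peer * blocksize, blocksize,
                      MPI_BYTE, peer, 0, comm, &reqs[nreq++]);
        for (int peer = 0; peer < size; peer++)
            MPI_Isend(sendbuf + (size_t)peer * blocksize, blocksize,
                      MPI_BYTE, peer, 0, comm, &reqs[nreq++]);

        MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);
        free(reqs);
        return MPI_SUCCESS;
    }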
