Subject: [OMPI devel] Pallas fails
From: Pavel Shamis (Pasha) (pasha_at_[hidden])
Date: 2008-06-04 03:20:39


On the last conf. call Jeff mentioned that he was seeing some collective failures.
In my MTT testing I also see Pallas collective failures - http://www.open-mpi.org/mtt/index.php?do_redir=682

 Alltoall

#----------------------------------------------------------------
# Benchmarking Alltoall
# #processes = 20
#----------------------------------------------------------------
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]
            0         1000         0.03         0.05         0.04
            1         1000       179.15       179.22       179.18
            2         1000       155.96       156.02       155.98
            4         1000       156.93       156.98       156.95
            8         1000       163.63       163.67       163.65
           16         1000       115.04       115.08       115.07
           32         1000       123.57       123.62       123.59
           64         1000       129.78       129.82       129.80
          128         1000       141.45       141.49       141.48
          256         1000       960.11       960.24       960.20
          512         1000       900.95       901.11       901.04
         1024         1000       921.95       922.05       922.00
         2048         1000       862.50       862.72       862.60
         4096         1000      1044.90      1044.95      1044.92
         8192         1000      1458.59      1458.77      1458.69
*** An error occurred in MPI_Alltoall
*** on communicator MPI COMMUNICATOR 4 SPLIT FROM 0
*** An error occurred in MPI_Alltoall
*** on communicator MPI COMMUNICATOR 4 SPLIT FROM 0
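
For anyone who wants to try reproducing this outside of Pallas/IMB: the
"SPLIT FROM 0" in the error output suggests the failing pattern is simply
MPI_Alltoall on a communicator obtained via MPI_Comm_split. Below is a
minimal sketch of that pattern - it is NOT the exact IMB code; the
8192-byte message size and the repetition count are just guesses based on
the output above (the failure shows up right after the 8192-byte row):

/* Hypothetical minimal reproducer: MPI_Alltoall on a split communicator.
 * Message size and loop count are assumptions, not taken from IMB source. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Comm split_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Split MPI_COMM_WORLD, as IMB does for its "active" process group.
       Here all ranks land in one group (color 0). */
    MPI_Comm_split(MPI_COMM_WORLD, 0, rank, &split_comm);

    int split_size;
    MPI_Comm_size(split_comm, &split_size);

    const int count = 8192;  /* bytes per peer; failure appeared after 8K */
    char *sendbuf = malloc((size_t)count * split_size);
    char *recvbuf = malloc((size_t)count * split_size);
    memset(sendbuf, rank, (size_t)count * split_size);

    for (int i = 0; i < 1000; i++)
        MPI_Alltoall(sendbuf, count, MPI_CHAR,
                     recvbuf, count, MPI_CHAR, split_comm);

    if (rank == 0)
        printf("done\n");

    free(sendbuf);
    free(recvbuf);
    MPI_Comm_free(&split_comm);
    MPI_Finalize();
    return 0;
}

Run with something like "mpirun -np 20 ./a.out" to match the 20-process
run above.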