
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] very bad parallel scaling of vasp using openmpi
From: Craig Plaisance (cpp6f_at_[hidden])
Date: 2009-08-18 13:34:04


I ran a TCP test with NetPIPE and got a throughput of 850 Mb/s at a
message size of 128 Kb; the latency was 50 us. At message sizes above
1000 Kb, the throughput oscillated wildly between 850 Mb/s and values as
low as 200 Mb/s. This test was done with no other network traffic. I
then ran four tests simultaneously between different pairs of compute
nodes and saw a drastic decrease in performance. The highest stable
(non-oscillating) throughput was about 500 Mb/s, at a message size of 16
Kb. Above that, the throughput oscillated wildly, with the maximum
climbing to 850 Mb/s at message sizes greater than 128 Kb and dropping
to values as low as 100 Mb/s.

The code I am using (VASP) has 100 to 1000 double complex (16-byte)
arrays containing 100,000 to 1,000,000 elements each. Typically, the
arrays are distributed among the nodes. The most communication-intensive
part executes an MPI_Alltoall to redistribute the arrays so that node i
contains the ith block of each array. The default message size is 1000
elements (128 Kb), so according to the NetPIPE test I should be getting
very good throughput when there is no other network traffic.

I will run a NetPIPE test with Open MPI and MPICH2 now and post the
results. So, does anyone know what causes the wild oscillations in
throughput at larger message sizes and higher network traffic? Thanks!
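For reference, the 128 Kb figure follows directly from the stated element
count and size (Kb here meaning kilobits, consistent with the Mb/s
throughput numbers). A quick sanity check of the arithmetic in plain
Python, nothing VASP-specific:

```python
# Message size: 1000 double complex elements at 16 bytes each
# (double complex = two 8-byte doubles).
elements = 1000
bytes_per_element = 16

message_bytes = elements * bytes_per_element   # 16,000 bytes = 16 KB
message_kilobits = message_bytes * 8 / 1000    # 128 Kb (kilobits)

print(message_bytes, message_kilobits)  # 16000 128.0
```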
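The redistribution described above (node i ends up holding the ith block of
every array) is exactly the all-to-all exchange pattern. A minimal sketch of
that data movement using plain Python lists in place of MPI ranks; the rank
count, block size, and array contents are illustrative, not taken from VASP:

```python
# Each "rank" starts with one whole array, split into nranks contiguous
# blocks. After the exchange, rank i holds block i of every array -- the
# same pattern MPI_Alltoall performs on the ranks' send buffers.
nranks = 4   # illustrative number of ranks
block = 2    # illustrative elements per block

# arrays[r] is the array initially held by rank r.
arrays = [[r * 100 + e for e in range(nranks * block)] for r in range(nranks)]

# send_bufs[r][i] is the block rank r sends to rank i.
send_bufs = [[a[i * block:(i + 1) * block] for i in range(nranks)]
             for a in arrays]

# The all-to-all exchange is a transpose of the send buffers:
# recv_bufs[i][r] is what rank i received from rank r.
recv_bufs = [[send_bufs[r][i] for r in range(nranks)]
             for i in range(nranks)]

print(recv_bufs[1])  # rank 1 now holds block 1 of every array
```

The point of the sketch is that every rank exchanges one block with every
other rank in a single step, so the operation is sensitive to exactly the
kind of contention seen when four NetPIPE streams run at once.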