Subject: Re: [OMPI users] very bad parallel scaling of vasp using openmpi
From: Gus Correa (gus_at_[hidden])
Date: 2009-08-18 18:43:15


Hi Craig, list

Independently of any issues with your GigE switch,
which you may still need to address,
you may want to take a look at the performance of the default
Open MPI MPI_Alltoall algorithm, which you say is a cornerstone of VASP.
You can try alternative algorithms for different message
sizes using Open MPI's tuned collectives.
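
For example (a sketch only: these MCA parameter names are from the
1.3-series "tuned" collective component, and the algorithm numbering
may differ on your build, so please check "ompi_info --param coll tuned"
first), you can pin the alltoall algorithm on the mpirun command line:

   mpirun --mca coll_tuned_use_dynamic_rules 1 \
          --mca coll_tuned_alltoall_algorithm 3 \
          -np 16 ./vasp

If I remember correctly, 3 selects the modified Bruck algorithm,
which tends to do better for small messages; setting it back to 0
restores Open MPI's built-in decision logic. For finer control you
can also supply per-message-size rules in a file named by
coll_tuned_dynamic_rules_filename.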

Please see this long thread from last May,
in which it was reported that the CPMD code
(another molecular dynamics code, like VASP, right?),
which also uses MPI_Alltoall,
didn't perform well for not-so-large messages
and scaled poorly.
I suppose your messages also get smaller
when you increase the number of processors,
assuming the problem size is kept constant, right?
(See the rough arithmetic after the link below.)
The thread suggests diagnostics and solutions,
and I found it quite helpful:

http://www.open-mpi.org/community/lists/users/2009/05/9355.php
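
To put rough numbers on the shrinking messages (round figures, not
VASP's actual decomposition): in an alltoall over P processes, an
N-element distributed array gets cut into P^2 pieces, so each
pairwise message carries only N/P^2 elements. With N = 1,000,000
double complex (16-byte) elements:

   P =  8  ->  N/P^2 = 15,625 elements  ~ 250 KB per message
   P = 32  ->  N/P^2 =    977 elements  ~  16 KB per message
   P = 64  ->  N/P^2 =    244 elements  ~   4 KB per message

The per-message size falls with the square of the process count,
straight into the range where latency and algorithm choice, not raw
bandwidth, dominate.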

Sorry, we're not computational chemists here,
but our programs also use MPI collectives.
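
For what it's worth, the redistribution you describe below (process i
ends up with the i-th block of every array) is the textbook
MPI_Alltoall pattern. Here is a minimal C sketch of it (illustrative
only, with made-up block sizes; VASP's actual code is Fortran and
more involved):

   #include <mpi.h>
   #include <stdlib.h>

   int main(int argc, char **argv)
   {
       int nprocs;
       MPI_Init(&argc, &argv);
       MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

       /* Each process holds nprocs blocks of 1000 double complex
          (16-byte) elements, stored here as pairs of doubles. */
       const int blk = 1000;
       double *sendbuf = malloc((size_t)nprocs * blk * 2 * sizeof(double));
       double *recvbuf = malloc((size_t)nprocs * blk * 2 * sizeof(double));
       /* ... fill sendbuf; block j is destined for process j ... */

       /* Afterwards, process i holds block i from every process.
          The count (2*blk doubles = 16 KB) is the per-destination
          message size -- the number that shrinks as you add ranks. */
       MPI_Alltoall(sendbuf, 2 * blk, MPI_DOUBLE,
                    recvbuf, 2 * blk, MPI_DOUBLE, MPI_COMM_WORLD);

       free(sendbuf);
       free(recvbuf);
       MPI_Finalize();
       return 0;
   }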

Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------

Craig Plaisance wrote:
> I ran a TCP test using NetPIPE and got a throughput of 850 Mb/s at
> message sizes of 128 Kb. The latency was 50 us. At message sizes above
> 1000 Kb, the throughput oscillated wildly between 850 Mb/s and values as
> low as 200 Mb/s. This test was done with no other network traffic. I
> then ran four tests simultaneously between different pairs of compute
> nodes and saw a drastic decrease in performance. The highest stable
> (non-oscillating) throughput was about 500 Mb/s at a message size of 16
> Kb. The throughput then oscillated wildly, with the maximum value
> climbing to 850 Mb/s at a message size greater than 128 Kb and dropping
> to values as low as 100 Mb/s. The code I am using (VASP) has 100 to
> 1000 double complex (16-byte) arrays containing 100,000 to 1,000,000
> elements each. Typically, the arrays are distributed among the nodes.
> The most communication-intensive part involves executing an MPI_Alltoall
> to redistribute the arrays so that node i contains the ith block of each
> array. The default message size is 1000 elements (128 Kb), so according
> to the NetPIPE test, I should be getting very good throughput when there
> is no other network traffic. I will run a NetPIPE test with openmpi and
> mpich2 now and post the results. So, does anyone know what causes the
> wild oscillations in the throughput at larger message sizes and higher
> network traffic? Thanks!
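
PS, re the NetPIPE runs you're about to do with Open MPI and MPICH2:
the usual invocations are roughly as below (assuming NetPIPE 3.x;
check the options on your build). The names node1/node2 and the
output file names are just placeholders:

   # TCP baseline: receiver on node1, transmitter on node2
   node1$ NPtcp
   node2$ NPtcp -h node1 -o np.tcp.out

   # the same sweep through the Open MPI stack
   mpirun -np 2 -host node1,node2 NPmpi -o np.ompi.out

Comparing np.tcp.out and np.ompi.out point by point should tell you
how much of the oscillation is the network itself and how much the
MPI layer adds on top.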