
Open MPI User's Mailing List Archives


Subject: [OMPI users] MPI_Reduce performance
From: Gabriele Fatigati (g.fatigati_at_[hidden])
Date: 2010-09-08 05:21:49

Dear OpenMPI users,

I'm using Open MPI 1.3.3 on an InfiniBand 4x interconnect. My
parallel application makes intensive use of MPI_Reduce over a
communicator created with MPI_Comm_split.

I've noticed strange behaviour during execution. My code is instrumented with
Scalasca 1.3 to report subroutine execution times. The first run measures the
elapsed time with 128 processors (job_communicator is created with
MPI_Comm_split; in both runs it is composed of the same ranks).


The elapsed time is 2671 sec.

The second run uses MPI_Barrier before MPI_Reduce:


The elapsed time of Barrier+Reduce is 2167 sec (about 8 minutes less).

So, in my opinion, it is better to put an MPI_Barrier before every MPI_Reduce
to mitigate the "asynchronous" behaviour of MPI_Reduce in Open MPI. I suspect
the same holds for other collective communications. Can someone explain to me
why MPI_Reduce shows this strange behaviour?

Thanks in advance.

Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Technologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
Tel: +39 051 6171722
g.fatigati [AT]