
Open MPI User's Mailing List Archives



Subject: Re: [OMPI users] possible bug exercised by mpi4py
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2012-05-24 09:37:09

On May 24, 2012, at 9:28 AM, Jeff Squyres wrote:

> So I checked them all, and I found SCATTERV, GATHERV, and REDUCE_SCATTER all had the issue. Now fixed on the trunk, and will be in 1.6.1.

I forgot to mention -- this issue exists waaay back in the Open MPI code base. I spot-checked Open MPI 1.2.0 and see it there, too.

To be clear: this particular bug only shows itself when you invoke ALLGATHERV, GATHERV, SCATTERV, or REDUCE_SCATTER on an intercommunicator where the two groups have unequal sizes. Whether the problem actually manifests is rather random (i.e., it depends on what memory happens to sit just past the end of the recvcounts array). FWIW, you can work around this bug by setting the MCA parameter "mpi_param_check" to 0, which disables all MPI function parameter checking. That may not be attractive in some cases, of course.
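The workaround above can be applied per-run on the mpirun command line, or via an environment variable. A minimal sketch (the application name `my_mpi4py_app.py` is just a placeholder; note that this disables *all* parameter checking, not only the buggy recvcounts validation):

```shell
# Option 1: pass the MCA parameter directly to mpirun
mpirun --mca mpi_param_check 0 -np 4 python my_mpi4py_app.py

# Option 2: set it in the environment before launching
export OMPI_MCA_mpi_param_check=0
mpirun -np 4 python my_mpi4py_app.py
```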

More specifically: since this problem has been in the OMPI code base for *years* (possibly since 1.0 -- but I'm not going to bother to check), it shows how rarely real-world applications actually exercise this specific functionality. Don't get me wrong -- I'm *very* thankful to the mpi4py community for raising this issue, and I'm glad to get it fixed! But it does show that there are dark, dusty corners of MPI functionality where few bother to tread. :-)

Jeff Squyres