
Subject: Re: [OMPI users] send and receive buffer the same on root
From: Tom Rosmond (rosmond_at_[hidden])
Date: 2010-09-16 17:00:59


The programmer responsible for this code has conceded the point, and we
will be replacing all of the offending calls with the MPI_IN_PLACE
solution. Thanks for the input.

T. Rosmond
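
For reference, a minimal sketch of what the MPI_IN_PLACE fix might look
like for the mpi_gatherv call quoted below. The names x, nsize, nstep,
rank, root, and mstat follow the original post; everything else
(root = 0, the per-rank counts, and the fill values) is invented for
illustration:

    program gatherv_in_place
       ! Sketch of the MPI_IN_PLACE fix; declarations and sizes here
       ! are assumptions for illustration, not from the original code.
       use mpi
       implicit none
       integer :: rank, nproc, root, mstat, i
       integer, allocatable :: nsize(:), nstep(:)
       real, allocatable :: x(:)

       call mpi_init(mstat)
       call mpi_comm_rank(mpi_comm_world, rank, mstat)
       call mpi_comm_size(mpi_comm_world, nproc, mstat)
       root = 0

       ! Illustrative counts: rank r contributes r+1 values.
       allocate(nsize(nproc), nstep(nproc))
       nsize = (/ (i, i = 1, nproc) /)
       nstep(1) = 0
       do i = 2, nproc
          nstep(i) = nstep(i-1) + nsize(i-1)
       end do

       allocate(x(sum(nsize)))
       x = 0.0
       ! Each rank's contribution starts at x(1), as in the original
       ! call; with root = 0 and nstep(1) = 0, the root's contribution
       ! already sits at its displacement in the receive buffer.
       x(1:nsize(rank+1)) = real(rank + 1)

       if (rank == root) then
          ! On the root, MPI_IN_PLACE replaces the send buffer, so the
          ! send and receive buffers no longer alias.
          call mpi_gatherv(MPI_IN_PLACE, 0, mpi_real, &
                           x, nsize, nstep, mpi_real, &
                           root, mpi_comm_world, mstat)
       else
          ! Non-root ranks are unchanged; the receive arguments are
          ! ignored away from the root, so passing x twice is legal.
          call mpi_gatherv(x, nsize(rank+1), mpi_real, &
                           x, nsize, nstep, mpi_real, &
                           root, mpi_comm_world, mstat)
       end if

       if (rank == root) print *, 'gathered: ', x
       call mpi_finalize(mstat)
    end program gatherv_in_place

The key point is that only the root's call changes: MPI_IN_PLACE tells
the library to take the root's contribution from its slot in the
receive buffer x, so the send/receive aliasing that the MPI standard
forbids never occurs.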

On Thu, 2010-09-16 at 13:56 -0700, Tim Prince wrote:
> On 9/16/2010 9:58 AM, David Zhang wrote:
> > It's compiler specific, I think. I've done this with Open MPI with
> > no problem; however, on another cluster with ifort I've gotten
> > error messages about not using MPI_IN_PLACE. So I think if it
> > compiles, it should work fine.
> >
> > On Thu, Sep 16, 2010 at 10:01 AM, Tom Rosmond <rosmond_at_[hidden]>
> > wrote:
> > I am working with a Fortran 90 code with many MPI calls like
> > this:
> >
> >      call mpi_gatherv(x, nsize(rank+1), mpi_real, &
> >                       x, nsize, nstep, mpi_real, &
> >                       root, mpi_comm_world, mstat)
> >
> The compiler can't affect what happens here (unless perhaps you use x
> again somewhere). Maybe you mean the MPI library? Intel MPI probably
> checks this at run time and issues an error.
> I've dealt with run-time errors (which surfaced alongside an ifort
> upgrade) caused by multiple uses of the same buffer: silent failure
> (incorrect numerics) under Open MPI, but a fatal diagnostic from the
> Intel MPI run time. Moral: even if it works for you now with Open
> MPI, you could be setting yourself up for unexpected failure in the
> future.
> --
> Tim Prince