
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-02-11 15:01:49


On Feb 11, 2010, at 2:14 PM, Lisandro Dalcin wrote:

> But you did not answer my previous question... What's the rationale
> for requiring sendbuf!=recvbuf when count=0? I would argue you want a
> free ticket :-) to put restrictions on user code (without an actual
> rationale) in order to simplify your implementation.

I don't understand your assertion. The MPI spec clearly says that sendbuf must != recvbuf. If you want the sendbuf to be the same as the recvbuf, MPI supports MPI_IN_PLACE for several operations.

I realize that's not what you're trying to do, but these are the semantics that MPI has defined.
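For concreteness, here is a minimal sketch of the in-place form. This is my example, not code from OMPI, and the variable names are illustrative:

    /* Sketch: in-place reduce at the root; non-roots pass a normal sendbuf. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 1;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (0 == rank) {
            /* Root reads its contribution from 'value' and the result
               overwrites it, so no separate sendbuf is needed. */
            MPI_Reduce(MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM, 0,
                       MPI_COMM_WORLD);
        } else {
            /* recvbuf is only significant at the root, so non-roots may
               pass anything here (including NULL). */
            MPI_Reduce(&value, NULL, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }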

> > While zero-length arrays/sequences/containers do appear in real code, they are not equal to NULL. If they are NULL, that means they do not contain any useful data, and they don't need to be the source or target of any kind of [collective or point-to-point] communication.

And even stronger than this: remember that NULL *is* a valid pointer for MPI when it is paired with an appropriate datatype. As I said in an earlier mail, NULL is therefore not a special case buffer for sendbuf or recvbuf.

To be absolutely clear: none of OMPI's MPI API calls have checks of the form:

    /* a check like this does not exist in OMPI */
    if (NULL == choice_buffer)
        return MPI_ERR_BUFFER;
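
For example, here is a sketch of a perfectly legal send whose buffer argument is literally the null pointer, because the data location comes entirely from a datatype built with absolute addresses. This is my own illustration (it assumes a matching receive on rank 1), not OMPI code:

    /* Sketch: MPI_BOTTOM (the null pointer in OMPI's mpi.h) paired with a
       datatype whose displacement is an absolute address. */
    int payload = 42;
    int blocklen = 1;
    MPI_Aint addr;
    MPI_Datatype abs_type;

    MPI_Get_address(&payload, &addr);
    MPI_Type_create_hindexed(1, &blocklen, &addr, MPI_INT, &abs_type);
    MPI_Type_commit(&abs_type);

    /* The buffer argument is NULL; the datatype supplies the real address. */
    MPI_Send(MPI_BOTTOM, 1, abs_type, 1, 0, MPI_COMM_WORLD);

    MPI_Type_free(&abs_type);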

> Yes, I know. Moreover, I agree with you. NULL should be reserved for
> invalid pointers, not for zero-length arrays...

But it is not. MPI's datatype mechanism is so general that NULL is valid.

So yes, passing MPI_REDUCE(NULL, NULL, ...) violates the sendbuf!=recvbuf rule (partly because there is only one datatype argument in MPI_REDUCE). If a language can convert a buffer representation to NULL for you behind the scenes, then it's up to the language binding to catch/correct that.
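
For illustration, one way a binding could catch/correct that case might look like the following. This is a hypothetical helper of mine, not anything from OMPI or an existing binding:

    /* Hypothetical binding-level fixup: if a zero-length buffer was lowered to
       NULL on both sides, substitute a distinct non-NULL dummy address so that
       the sendbuf != recvbuf rule is still honored. */
    static int reduce_with_fixup(void *sendbuf, void *recvbuf, int count,
                                 MPI_Datatype datatype, MPI_Op op, int root,
                                 MPI_Comm comm)
    {
        static int dummy;  /* any valid address distinct from recvbuf */
        if (0 == count && NULL == sendbuf && NULL == recvbuf) {
            sendbuf = &dummy;
        }
        return MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm);
    }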

...at least by the wording in today's MPI spec. That being said, your Python example of buffers a and b unknowingly being transformed to NULL behind the scenes seems like exactly the kind of case that MPI should support better. These are exactly the kinds of issues that would be helpful to know about, discuss, and propose improvements for in MPI-3.

Could we convince you to come to a Forum meeting? :-)

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/