
Subject: Re: [OMPI users] Accessing to the send buffer
From: Alberto Canestrelli (canestrelli_at_[hidden])
Date: 2010-08-18 12:08:40


To Richard Treumann: you said "The Forum has decided the send buffer
rule is too restrictive." Do you mean that you are planning to change
the rule?
To Terry Frankcombe: if they are going to change the rule, everything
will be fine. You ask why I don't want to be standard-compliant?
Because it is a pain to double all the variables that I send just
because I read them later on! I would have to change most of my MPI code.
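To make the pain concrete, here is a minimal Fortran sketch of the
doubling I mean (the program and the names u and u_send are only
illustrative, not my actual code):

   program double_buffer_sketch
     use mpi
     implicit none
     integer, parameter :: n = 1000
     integer :: ierr, rank, req, stat(MPI_STATUS_SIZE)
     double precision :: u(n)       ! the array the code keeps reading
     double precision :: u_send(n)  ! the extra copy handed to MPI
     double precision :: s

     call MPI_Init(ierr)
     call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
     if (rank == 0) then
        u = 1.0d0
        u_send = u                  ! the "doubling" of the variable
        call MPI_Isend(u_send, n, MPI_DOUBLE_PRECISION, 1, 0, &
                       MPI_COMM_WORLD, req, ierr)
        s = sum(u)                  ! reading u is fine: MPI owns only u_send
        call MPI_Wait(req, stat, ierr)
     else if (rank == 1) then
        call MPI_Recv(u, n, MPI_DOUBLE_PRECISION, 0, 0, &
                      MPI_COMM_WORLD, stat, ierr)
     end if
     call MPI_Finalize(ierr)
   end program double_buffer_sketch

Every send buffer in the real code would need its own copy like this,
which is why I would rather see the rule relaxed.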
thanks
alberto

On 18/08/2010 11.56, Alberto Canestrelli wrote:
>
> On Mon, 2010-08-02 at 11:36 -0400, Alberto Canestrelli wrote:
> > Thanks,
> > OK, that is not my problem: I never read data from a posted receive
> > before the corresponding WAIT. Now the last question is: what could
> > happen if I read data from a posted send? I do it plenty of times!
> > What are the possible consequences? Can you guarantee that this
> > approach is safe?
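For clarity, this is the pattern I am asking about, reduced to a
minimal Fortran fragment; the subroutine and variable names are only
illustrative:

   subroutine questionable_load(u, n, dest, tag)
     use mpi
     implicit none
     integer, intent(in) :: n, dest, tag
     double precision, intent(in) :: u(n)   ! illustrative names
     integer :: ierr, req, stat(MPI_STATUS_SIZE)
     double precision :: s

     call MPI_Isend(u, n, MPI_DOUBLE_PRECISION, dest, tag, &
                    MPI_COMM_WORLD, req, ierr)
     s = sum(u)   ! LOAD from the posted send buffer before the WAIT:
                  ! exactly what the standard's wording forbids
     call MPI_Wait(req, stat, ierr)
   end subroutine questionable_load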
>
> Well, it seems from what you've posted that the standard says you should
> not assume it's safe. Don't you want to be standard-compliant?
>
> >
> > > On 02/08/2010 11.29, Alberto Canestrelli wrote:
> > > In the posted irecv case, if you are reading from the posted receive
> > > buffer, the problem is that you may get one of three values (a
> > > minimal sketch follows this message):
> > >
> > > 1. the pre-irecv value,
> > > 2. the value received by the irecv in progress, or
> > > 3. possibly garbage, if you are unlucky enough to access memory that
> > > is being updated at the same time.
> > >
> > > --td
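To illustrate the three outcomes above, a minimal sketch of a read
that races with a posted receive (illustrative names, not code from
the thread):

   subroutine racy_recv_read(v, n, src, tag)
     use mpi
     implicit none
     integer, intent(in) :: n, src, tag
     double precision, intent(inout) :: v(n)   ! illustrative names
     integer :: ierr, req, stat(MPI_STATUS_SIZE)
     double precision :: s

     call MPI_Irecv(v, n, MPI_DOUBLE_PRECISION, src, tag, &
                    MPI_COMM_WORLD, req, ierr)
     s = v(1)     ! illustrative read: may observe the pre-irecv value,
                  ! the newly received value, or garbage if it races
                  ! with the incoming write
     call MPI_Wait(req, stat, ierr)
   end subroutine racy_recv_read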
> > > Alberto Canestrelli wrote:
> > >> Thanks,
> > >> it was late at night yesterday and I highlighted STORES, but I
> > >> meant to highlight LOADS! I know that stores are not allowed while
> > >> a nonblocking send-recv is in progress, but I was surprised by the
> > >> LOADS case. I always do some loads of the data between all my
> > >> ISEND-IRECVs and my WAITs. Could you please confirm that OMPI can
> > >> handle the LOAD case? And if it cannot handle it, what could the
> > >> consequence be? What could happen, in the worst case, when there is
> > >> a data race on a read?
> > >> thanks
> > >> alberto
> > >>
> > >> On 02/08/2010 9.32, Alberto Canestrelli wrote:
> > >> > I believe it is definitely a no-no to STORE (write) into a send
> > >> > buffer while a send is posted. I know there has been debate in
> > >> > the Forum about relaxing LOADS (reads) from a send buffer. I
> > >> > think OMPI can handle the latter case (LOADS). On the posted
> > >> > receive side, you open yourself up to race conditions and
> > >> > overwrites if you do STORES or LOADS from a posted receive buffer.
> > >> >
> > >> > --td
> > >> >
> > >> > Alberto Canestrelli wrote:
> > >> >> Hi,
> > >> >> I have a problem with a Fortran code that I have parallelized
> > >> >> with MPI. I should say up front that I have read the whole ebook
> > >> >> "MIT Press - MPI - The Complete Reference, Volume 1" and taken
> > >> >> several MPI classes, so I have a fair knowledge of MPI. I was
> > >> >> able to solve by myself all the errors I ran into, but now I
> > >> >> cannot find the bug in my code that produces erroneous results.
> > >> >> Without going into the details of my code, I think the cause of
> > >> >> the problem could be related to the following aspect highlighted
> > >> >> in that ebook (below I copy and paste from the e-book):
> > >> >>
> > >> >> A nonblocking post-send call indicates that the system may start
> > >> >> copying data out of the send buffer. The sender must not access
> > >> >> any part of the send buffer (neither for loads nor for STORES)
> > >> >> after a nonblocking send operation is posted until the complete
> > >> >> send returns.
> > >> >> A nonblocking post-receive indicates that the system may start
> > >> >> writing data into the receive buffer. The receiver must not
> > >> >> access any part of the receive buffer after a nonblocking receive
> > >> >> operation is posted, until the complete-receive returns.
> > >> >> Rationale. We prohibit read accesses to a send buffer while it is
> > >> >> being used, even though the send operation is not supposed to
> > >> >> alter the content of this buffer. This may seem more stringent
> > >> >> than necessary, but the additional restriction causes little loss
> > >> >> of functionality and allows better performance on some systems;
> > >> >> consider the case where data transfer is done by a DMA engine
> > >> >> that is not cache-coherent with the main processor. End of
> > >> >> rationale.
> > >> >>
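Put in code, the quoted rule amounts to this minimal pattern
(illustrative names): post the operations, touch neither buffer in
between, and only access them again after the wait:

   subroutine conforming_exchange(sbuf, rbuf, n, peer)
     use mpi
     implicit none
     integer, intent(in) :: n, peer
     double precision, intent(in) :: sbuf(n)   ! illustrative names
     double precision, intent(out) :: rbuf(n)
     integer :: ierr, reqs(2), stats(MPI_STATUS_SIZE, 2)

     call MPI_Isend(sbuf, n, MPI_DOUBLE_PRECISION, peer, 0, &
                    MPI_COMM_WORLD, reqs(1), ierr)
     call MPI_Irecv(rbuf, n, MPI_DOUBLE_PRECISION, peer, 0, &
                    MPI_COMM_WORLD, reqs(2), ierr)
     ! ... only work that touches neither sbuf nor rbuf goes here ...
     call MPI_Waitall(2, reqs, stats, ierr)
     ! after the wait, both buffers may be loaded and stored freely
   end subroutine conforming_exchange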
> > >> >> I use plenty of nonblocking post-sends in my code. Is it really
> > >> >> true that the sender must not access any part of the send
> > >> >> buffer, not even for STORES? Or was that an MPI 1.0 issue?
> > >> >> Thanks.
> > >> >> alberto
> > >
> >

-- 
******************************************************
Ing. Alberto Canestrelli
Università degli Studi di Padova,
Dipartimento di Ingegneria Idraulica, Marittima,
Ambientale e Geotecnica,
via Loredan 20, 35131 PADOVA (ITALY)
phone: +39 0498275438
fax:  +39 0498275446
mail:  canestrelli_at_[hidden]
*******************************************************