On Mon, 2010-08-02 at 11:36 -0400, Alberto Canestrelli wrote:
> OK, that is not my problem: I never read data from a posted receive
> before the corresponding WAIT. Now the last question is: what could
> happen if I am reading the data from a posted send? I do it plenty of
> times! What are the possible consequences? Can you guarantee me that this approach is
Well, it seems from what you've posted that the standard says you should
not assume it's safe. Don't you want to be standard-compliant?
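The access rules being debated in the quoted messages can be sketched as a minimal Fortran neighbor exchange (array sizes, tags, and ranks are illustrative assumptions, not taken from Alberto's actual code; this is the conservative pattern the standard text below requires):

```fortran
! Sketch of the buffer-access rules discussed in this thread.
! Assumes a ring exchange on MPI_COMM_WORLD; all names are illustrative.
program nb_access_rules
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, reqs(2)
  integer :: stats(MPI_STATUS_SIZE, 2)
  double precision :: sendbuf(100), recvbuf(100)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  sendbuf = dble(rank)          ! write the send buffer BEFORE posting

  call MPI_Irecv(recvbuf, 100, MPI_DOUBLE_PRECISION, &
                 mod(rank - 1 + nprocs, nprocs), 0, MPI_COMM_WORLD, &
                 reqs(1), ierr)
  call MPI_Isend(sendbuf, 100, MPI_DOUBLE_PRECISION, &
                 mod(rank + 1, nprocs), 0, MPI_COMM_WORLD, &
                 reqs(2), ierr)

  ! Overlap computation here, but per the standard text quoted below:
  ! no LOADS or STORES on sendbuf, and no access at all to recvbuf,
  ! until the corresponding wait completes.

  call MPI_Waitall(2, reqs, stats, ierr)

  ! Only now are both buffers safe to read and write again.
  print *, 'rank', rank, 'received', recvbuf(1)

  call MPI_Finalize(ierr)
end program nb_access_rules
```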
> On 02/08/2010 11:29, Alberto Canestrelli wrote:
> > In the posted irecv case, if you are reading from the posted receive
> > buffer, the problem is you may get one of three values:
> > 1. the pre-irecv value
> > 2. the value received by the irecv in progress
> > 3. possibly garbage, if you are unlucky enough to access memory that is
> > being updated at the same time.
> > --td
> > Alberto Canestrelli wrote:
> >> Thanks,
> >> it was late at night yesterday and I highlighted STORES, but I
> >> meant to highlight LOADS! I know that
> >> stores are not allowed while you are doing a non-blocking send-recv. But
> >> I was struck by the LOADS case. I always do some loads of the data
> >> between all my ISEND-IRECVs and my WAITs. Could you please confirm
> >> that OMPI can handle the LOAD case? And if it cannot handle it, what
> >> could the consequence be? What could happen in the worst case
> >> when there is a data race while reading the data?
> >> thanks
> >> alberto
> >> On 02/08/2010 9:32, Alberto Canestrelli wrote:
> >> > I believe it is definitely a no-no to STORE (write) into a send buffer
> >> > while a send is posted. I know there has been debate in the forum about
> >> > relaxing LOADS (reads) from a send buffer. I think OMPI can handle the
> >> > latter case (LOADS). On the posted receive side you open yourself up
> >> > to race conditions and overwrites if you do STORES or LOADS from a
> >> > posted receive buffer.
> >> >
> >> > --td
> >> >
> >> > Alberto Canestrelli wrote:
> >> >> Hi,
> >> >> I have a problem with a Fortran code that I have parallelized with
> >> >> MPI. I state in advance that I have read the whole ebook "MIT Press -
> >> >> MPI: The Complete Reference, Volume 1" and I have taken several MPI
> >> >> classes, so I have a fair knowledge of MPI. I was able to solve by
> >> >> myself all the errors I ran into, but now I am not able to find the
> >> >> bug in my code that produces erroneous results. Without going into
> >> >> the details of my code, I think the cause of the problem could be
> >> >> related to the following aspect highlighted in the above ebook
> >> >> (below I copy and paste from the e-book):
> >> >>
> >> >> A nonblocking post-send call indicates that the system may start
> >> >> copying data out of the send buffer. The sender must not access any
> >> >> part of the send buffer (neither for loads nor for STORES) after a
> >> >> nonblocking send operation is posted until the complete-send returns.
> >> >> A nonblocking post-receive indicates that the system may start
> >> >> writing data into the receive buffer. The receiver must not access
> >> >> any part of the receive buffer after a nonblocking receive operation
> >> >> is posted, until the complete-receive returns.
> >> >> Rationale. We prohibit read accesses to a send buffer while it is
> >> >> being used, even though the send operation is not supposed to alter
> >> >> the content of this buffer. This may seem more stringent than
> >> >> necessary, but the additional restriction causes little loss of
> >> >> functionality and allows better performance on some systems;
> >> >> consider the case where data transfer is done by a DMA engine that
> >> >> is not cache-coherent with the main processor. End of rationale.
> >> >>
> >> >> I use plenty of nonblocking post-sends in my code. Is it really true
> >> >> that the sender must not access any part of the send buffer, not even
> >> >> for STORES? Or was it an MPI 1.0 issue?
> >> >> Thanks.
> >> >> alberto
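One conservative way to keep the loads between ISEND and WAIT that the thread discusses, while staying within the rule quoted above, is to post the send from a snapshot copy so the working array remains legal to read. This is only a sketch under assumed names (`work`, `sendcopy`, and the ring exchange are illustrative, not from Alberto's code); the trade-off is one extra buffer and one copy per send:

```fortran
! Workaround sketch: send from a snapshot so loads on the working
! array stay standard-compliant while the send is in flight.
program send_from_copy
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, reqs(2)
  integer :: stats(MPI_STATUS_SIZE, 2)
  double precision :: work(100), sendcopy(100), recvbuf(100), s

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  work = dble(rank)
  sendcopy = work               ! snapshot: the in-flight send owns this copy

  call MPI_Irecv(recvbuf, 100, MPI_DOUBLE_PRECISION, &
                 mod(rank - 1 + nprocs, nprocs), 0, MPI_COMM_WORLD, &
                 reqs(1), ierr)
  call MPI_Isend(sendcopy, 100, MPI_DOUBLE_PRECISION, &
                 mod(rank + 1, nprocs), 0, MPI_COMM_WORLD, &
                 reqs(2), ierr)

  s = sum(work)                 ! loads from work, not from the posted buffer

  call MPI_Waitall(2, reqs, stats, ierr)
  call MPI_Finalize(ierr)
end program send_from_copy
```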
> Ing. Alberto Canestrelli
> Università degli Studi di Padova,
> Dipartimento di Ingegneria Idraulica, Marittima,
> Ambientale e Geotecnica,
> via Loredan 20, 35131 PADOVA (ITALY)
> phone: +39 0498275438
> fax: +39 0498275446
> mail: canestrelli_at_[hidden]