Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Accessing to the send buffer
From: Terry Dontje (terry.dontje_at_[hidden])
Date: 2010-08-02 11:45:47


For OMPI I believe reading the data buffer given to a posted send will
not cause any problems.
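
To make the case concrete, here is a minimal Fortran sketch of the pattern
under discussion: a LOAD from the send buffer between MPI_ISEND and the
matching MPI_WAIT. It is illustrative only (buffer size, datatype, tag and
neighbor ranks are made up, not taken from Alberto's code):

  program isend_load_sketch
    use mpi
    implicit none
    integer, parameter :: n = 100
    integer :: ierr, rank, nprocs, dest, src, sreq, rreq
    integer :: sstat(MPI_STATUS_SIZE), rstat(MPI_STATUS_SIZE)
    double precision :: sendbuf(n), recvbuf(n), local_sum

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    dest = mod(rank + 1, nprocs)
    src  = mod(rank - 1 + nprocs, nprocs)
    sendbuf = dble(rank)

    call MPI_IRECV(recvbuf, n, MPI_DOUBLE_PRECISION, src,  0, &
                   MPI_COMM_WORLD, rreq, ierr)
    call MPI_ISEND(sendbuf, n, MPI_DOUBLE_PRECISION, dest, 0, &
                   MPI_COMM_WORLD, sreq, ierr)

    ! The LOAD in question: reading sendbuf before the send completes.
    ! MPI-1 formally forbids this; the question is whether OMPI tolerates it.
    local_sum = sum(sendbuf)
    print *, 'rank', rank, 'sum of send buffer:', local_sum

    ! Note: reading recvbuf here instead would be the receive-side case,
    ! which is the genuinely unsafe one.

    call MPI_WAIT(sreq, sstat, ierr)
    call MPI_WAIT(rreq, rstat, ierr)
    call MPI_FINALIZE(ierr)
  end program isend_load_sketch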

Anyone on the list care to disagree?

--td

Alberto Canestrelli wrote:
> Thanks,
> OK, that is not my problem: I never read data from the posted receive
> buffer before the corresponding WAIT. Now the last question is: what could
> happen if I am reading the data from the posted send? I do it plenty of
> times! Possible consequences? Can you guarantee that this approach is safe?
> Thank you very much,
> Alberto
>
> On 02/08/2010 11:29, Alberto Canestrelli wrote:
>> In the posted irecv case, if you are reading from the posted receive
>> buffer, the problem is that you may get one of three values:
>>
>> 1. the pre-irecv value,
>> 2. the value received by the irecv in progress, or
>> 3. possibly garbage, if you are unlucky enough to access memory that is
>> being updated at the same time.
>>
>> --td
>> Alberto Canestrelli wrote:
>>> Thanks,
>>> it was late at night yesterday and I highlighted STORES, but I
>>> meant to highlight LOADS! I know that
>>> stores are not allowed when you are doing a non-blocking send/recv. But
>>> I was surprised by the LOADS case. I always do some loads of the data
>>> between all my ISEND/IRECVs and my WAITs. Could you please confirm
>>> that OMPI can handle the LOAD case? And if it cannot handle it, what
>>> could the consequences be? What could happen in the worst case
>>> when there is a data race in reading the data?
>>> thanks
>>> alberto
>>>
>>> On 02/08/2010 9:32, Alberto Canestrelli wrote:
>>> > I believe it is definitely a no-no to STORE (write) into a send buffer
>>> > while a send is posted. I know there has been debate in the forum about
>>> > relaxing LOADS (reads) from a send buffer. I think OMPI can handle the
>>> > latter case (LOADS). On the posted receive side you open yourself up
>>> > to race conditions and overwrites if you do STORES or LOADS from a
>>> > posted receive buffer.
>>> >
>>> > --td
>>> >
>>> > Alberto Canestrelli wrote:
>>> >> Hi,
>>> >> I have a problem with a Fortran code that I have parallelized with
>>> >> MPI. I state in advance that I read the whole ebook "MIT Press - MPI -
>>> >> The Complete Reference, Volume 1" and I took several MPI classes, so
>>> >> I have a fair MPI knowledge. I was able to solve by myself all the
>>> >> errors I bumped into, but now I am not able to find the bug in my code
>>> >> that produces erroneous results. Without going into the details of my
>>> >> code, I think that the cause of the problem could be related to the
>>> >> following aspect highlighted in the above ebook (below I copy
>>> >> and paste from the e-book):
>>> >>
>>> >> A nonblocking post-send call indicates that the system may start
>>> >> copying data out of the send buffer. The sender must not access any
>>> >> part of the send buffer (neither for loads nor for STORES) after a
>>> >> nonblocking send operation is posted until the complete-send returns.
>>> >> A nonblocking post-receive indicates that the system may start
>>> >> writing data into the receive buffer. The receiver must not access
>>> >> any part of the receive buffer after a nonblocking receive operation
>>> >> is posted, until the complete-receive returns.
>>> >> Rationale. We prohibit read accesses to a send buffer while it is
>>> >> being used, even though the send operation is not supposed to alter
>>> >> the content of this buffer. This may seem more stringent than
>>> >> necessary, but the additional restriction causes little loss of
>>> >> functionality and allows better performance on some systems;
>>> >> consider the case where data transfer is done by a DMA engine that
>>> >> is not cache-coherent with the main processor. (End of rationale.)
>>> >>
>>> >> I use plenty of nonblocking post-sends in my code. Is it really true
>>> >> that the sender must not access any part of the send buffer, not even
>>> >> for STORES? Or was it an MPI 1.0 issue?
>>> >> Thanks.
>>> >> alberto
>>
>
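
For reference, a minimal sketch of the receive-side race described in the
quoted messages (again illustrative only; names, sizes and tag are made up).
A read of the receive buffer between MPI_IRECV and MPI_WAIT can observe the
pre-irecv value, the value being received, or garbage; the same read after
MPI_WAIT is well defined:

  program irecv_read_sketch
    use mpi
    implicit none
    integer, parameter :: n = 100
    integer :: ierr, rank, nprocs, dest, src, sreq, rreq
    integer :: sstat(MPI_STATUS_SIZE), rstat(MPI_STATUS_SIZE)
    double precision :: sendbuf(n), recvbuf(n)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    dest = mod(rank + 1, nprocs)
    src  = mod(rank - 1 + nprocs, nprocs)
    sendbuf = dble(rank)
    recvbuf = -1.0d0                      ! the "pre-irecv" value

    call MPI_IRECV(recvbuf, n, MPI_DOUBLE_PRECISION, src,  0, &
                   MPI_COMM_WORLD, rreq, ierr)
    call MPI_ISEND(sendbuf, n, MPI_DOUBLE_PRECISION, dest, 0, &
                   MPI_COMM_WORLD, sreq, ierr)

    ! Racy read: may show -1.0, the neighbor's rank, or garbage.
    print *, 'rank', rank, 'before wait:', recvbuf(1)

    call MPI_WAIT(rreq, rstat, ierr)
    call MPI_WAIT(sreq, sstat, ierr)

    ! Well-defined read: the receive has completed.
    print *, 'rank', rank, 'after wait: ', recvbuf(1)
    call MPI_FINALIZE(ierr)
  end program irecv_read_sketch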

-- 
Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.650.633.7054
Oracle - Performance Technologies
95 Network Drive, Burlington, MA 01803
Email terry.dontje_at_[hidden]


