
Open MPI Development Mailing List Archives


From: Brian Barrett (bbarrett_at_[hidden])
Date: 2007-05-17 12:20:35


On the other hand, since the MPI standard explicitly says you're not
allowed to call fork() or system() during an MPI application, and
since the network should really cope with this in some way, I'm
strongly against it if it further complicates the code *at all*.
Especially since it won't really solve the problem. For example,
with one-sided, I'm not going to go out of my way to send the first
and last bits of the buffer just so the user can touch those pages
while calling fork().

Also, if I understand the leave_pinned protocol correctly, this still
won't really solve anything in the general case -- leave_pinned won't
send any data eagerly if the buffer is already pinned, so there are
still going to be situations where the user can cause problems. Then
we have a situation where sometimes it works and sometimes it
doesn't, and we pretend to support fork()/system() in certain cases.
Seems like actually fixing the problem the "right way" would be the
right path forward...
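
To illustrate why the pipeline trick doesn't cover the leave_pinned
case, here is a toy model of the eager-send decision described above.
All names and the eager limit are illustrative, not Open MPI's actual
internals:

    #include <stdbool.h>
    #include <stdio.h>

    #define EAGER_LIMIT 4096  /* assumed eager-fragment size: one page */

    static size_t eager_bytes(bool already_pinned) {
        if (!already_pinned)
            return EAGER_LIMIT; /* pipeline: the first fragment rides the
                                   rendezvous packet, so the sender's first
                                   page never needs to be registered */
        return 0;               /* leave_pinned: the buffer is pinned end to
                                   end from an earlier send; nothing goes
                                   eagerly, and every page stays registered
                                   across a later fork() */
    }

    int main(void) {
        printf("first send of a buffer: %zu eager bytes\n", eager_bytes(false));
        printf("repeat send (pinned):   %zu eager bytes\n", eager_bytes(true));
        return 0;
    }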

Brian

On May 17, 2007, at 10:10 AM, Jeff Squyres wrote:

> Moving to devel; this question seems worthwhile to push out to the
> general development community.
>
> I've been coming across an increasing number of customers and other
> random OMPI users who use system(). So if there's zero impact on
> performance and it doesn't make the code [more] incredibly horrible
> [than it already is], I'm in favor of this change.
>
>
> On May 17, 2007, at 7:00 AM, Gleb Natapov wrote:
>
>> Hi,
>>
>> I have been thinking about changing the pipeline protocol to send
>> data from the end of the message instead of from the middle, as it
>> does now. The rationale behind this is better fork() support. When
>> an application forks, the child doesn't inherit registered memory,
>> so IB providers educate users not to touch, in the child process,
>> buffers that were owned by MPI before the fork. The problem is that
>> the granularity of registration is a hardware page (4 KB), so the
>> last page of the buffer may also contain other application data;
>> the user may be unaware of this and be very surprised by a SIGSEGV.
>> If the pipeline protocol sends data from the end of the buffer,
>> then the last page of the buffer will not be registered (and the
>> first page is never registered, because we send the beginning of
>> the buffer eagerly with the rendezvous packet), so this situation
>> will be avoided. It should have zero impact on performance. What do
>> you think? How common is it for MPI applications to fork()?
>>
>> --
>> Gleb.
>> _______________________________________________
>> devel-core mailing list
>> devel-core_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/devel-core
>
>
> --
> Jeff Squyres
> Cisco Systems
>
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
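
To make the failure mode in Gleb's message above concrete, here is a
minimal, Linux-only sketch. It is not Open MPI code: the call to
madvise(MADV_DONTFORK) stands in for what IB memory registration does
to pinned pages, and all sizes and offsets are illustrative.

    /* Registration is page-granular, and registered pages are excluded
     * from the child at fork() time, so a child touching unrelated data
     * that shares the buffer's last page can SIGSEGV. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE); /* typically 4096 */
        char *block = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (block == MAP_FAILED)
            return 1;

        /* Pretend the first page plus 100 bytes are an MPI send buffer;
         * some unrelated user variable lives on that second page too. */
        char *user_data = block + page + 512;

        /* Registering the 100-byte tail pins the *whole* second page;
         * the provider marks it DONTFORK, so it is not mapped in any
         * child process. */
        madvise(block + page, page, MADV_DONTFORK);

        pid_t pid = fork();
        if (pid == 0) {
            *user_data = 'x'; /* child faults: this page was never mapped */
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);
        printf("child %s\n", WIFSIGNALED(status)
                                 ? "died with a signal (SIGSEGV)"
                                 : "exited normally");
        return 0;
    }

Sending from the end of the buffer, as Gleb proposes, leaves that last
page unregistered on the send side, which is exactly the page the
sketch above shows getting a child process into trouble.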