
Open MPI Development Mailing List Archives

This web mail archive is frozen; no new mails have been added to it since July 2016.

Subject: Re: [OMPI devel] System V Shared Memory for Open MPI: Request for Community Input and Testing
From: Samuel K. Gutierrez (samuel_at_[hidden])
Date: 2010-05-03 10:55:01

Hi all,

Does anyone know of a relatively portable way to query a
given system for the shmctl behavior that I am relying on, or is this
going to be a nightmare? If I am reading this thread
correctly, the presence of shmget and of Linux itself is not sufficient
for determining an adequate level of System V support.


Samuel K. Gutierrez
Los Alamos National Laboratory
On May 2, 2010, at 7:48 AM, N.M. Maclaren wrote:
> On May 2 2010, Ashley Pittman wrote:
>> On 2 May 2010, at 04:03, Samuel K. Gutierrez wrote:
>> As to performance, there should be no difference between SysV shared
>> memory and file-backed shared memory: the instructions issued and the
>> MMU flags for the pages should both be the same, so the performance
>> should be identical.
> Not necessarily, and possibly not so even for far-future Linuces.
> On at least one system I used, the poxious kernel wrote the complete
> file to disk before returning - all right, it did that for System V
> shared memory, too, just to a 'hidden' file!  But, if I recall, on
> another it did that only for file-backed shared memory - however, it's
> a decade ago now and I may be misremembering.
> Of course, that's a serious issue mainly for large segments.  I was
> using multi-GB ones.  I don't know how big the ones you need are.
>> The one area you do need to keep an eye on for performance is NUMA
>> machines, where it matters which process on a node touches each page
>> first: you can end up using different areas (pages, not regions) for
>> communicating in different directions between the same pair of
>> processes. I don't believe this is any different from mmap-backed
>> shared memory, though.
> On some systems it may be, but in bizarre, inconsistent, undocumented,
> and unpredictable ways :-(  Also, there are usually several system (and
> sometimes user) configuration options that change the behaviour, so you
> have to allow for that.  My experience of trying to use those is that
> different uses have incompatible requirements, and most of the critical
> configuration parameters apply to ALL uses!
> In my view, the configuration variability is the number one nightmare
> for trying to write portable code that uses any form of shared memory.
> ARMCI seem to agree.
>>> Because of this, sysv support may be limited to Linux systems - that
>>> is, until we can get a better sense of which systems provide the
>>> shmctl IPC_RMID behavior that I am relying on.
> And, I suggest, whether they have an evil gotcha in one of the areas
> that Ashley Pittman noted.
> Regards,
> Nick Maclaren.
> _______________________________________________
> devel mailing list
> devel_at_[hidden]