
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] System V Shared Memory for Open MPI: Request for Community Input and Testing
From: Sylvain Jeaugey (sylvain.jeaugey_at_[hidden])
Date: 2010-06-10 03:47:50


On Wed, 9 Jun 2010, Jeff Squyres wrote:

> On Jun 9, 2010, at 3:26 PM, Samuel K. Gutierrez wrote:
>
>> System V shared memory cleanup is a concern only if a process dies in
>> between shmat and shmctl IPC_RMID. Shared memory segment cleanup
>> should happen automagically in most cases, including abnormal process
>> termination.
>
> Umm... right. Duh. I knew that.
>
> Really.
>
> So -- we're good!
>
> Let's open the discussion of making sysv the default on systems that support the IPC_RMID behavior (which, AFAIK, is only Linux)...
I'm sorry, but I think System V has many disadvantages over mmap.

1. As discussed before, cleanup is not as easy as with a file. Removing the
shm segment right after creation is good practice, but since problems
often happen during shmget/shmat, there is still a high risk of leaving
segments behind.

2. There are kernel limits you need to raise (kernel.shmall,
kernel.shmmax). On most Linux distributions, shmmax is 32MB, which is too
small for the sysv mechanism to work. Mmapped files have no such limit.

3. Each shm segment is identified by a 32-bit integer key. This namespace
is small (and non-intuitive, compared to a file name), and the probability
of a collision is not zero, especially once you start creating multiple
shared memory segments (for collectives, one-sided operations, ...).

So, I'm a bit reluctant to work with System V mechanisms again. I don't
think there is a *real* reason for System V to be faster than mmap, since
both end up being plain memory. I'd rather find out why mmap is slower.

Sylvain