Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] SM init failures
From: Iain Bason (Iain.Bason_at_[hidden])
Date: 2009-04-01 14:20:14


On Mar 31, 2009, at 11:00 AM, Jeff Squyres wrote:

> On Mar 31, 2009, at 3:45 AM, Sylvain Jeaugey wrote:
>
>> Sorry to continue off-topic, but going to System V shm would feel to
>> me like going back in time.
>>
>> System V shared memory used to be the main way to do shared memory
>> in MPICH, and from my (limited) experience it was truly painful:
>> - Cleanup issues: does shmctl(IPC_RMID) solve _all_ cases? (even
>> kill -9?)
>> - Naming issues: shm segments are identified by 32-bit keys,
>> potentially causing conflicts between applications, or between
>> layers of the same application, on one node
>> - Space issues: the total shm size on a system is bounded by
>> /proc/sys/kernel/shmmax, requiring admin configuration and causing
>> conflicts between MPI applications running on the same node
>>
>
> Indeed. The one saving grace here is that the cleanup issue can
> apparently be solved on Linux with a special flag meaning
> "automatically remove this shmem when all processes attached to it
> have died." That was really the impetus for [re-]investigating sysv
> shm. I remember the sysv pain too, because we used it in LAM as
> well...
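
(For reference, the "flag" is, as far as I know, not a separate flag
but the Linux behavior of shmctl(IPC_RMID): if you mark the segment
for removal right after attaching, the kernel destroys it once the
last attached process exits, even on kill -9. A minimal sketch,
assuming Linux semantics:

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* Create a segment; 4 MiB is an arbitrary example size. */
        int id = shmget(IPC_PRIVATE, 4 * 1024 * 1024, IPC_CREAT | 0600);
        if (id < 0) { perror("shmget"); return 1; }

        /* Attach before marking for removal. */
        void *base = shmat(id, NULL, 0);
        if (base == (void *) -1) { perror("shmat"); return 1; }

        /* Mark for destruction now.  On Linux the segment persists
           until the last attached process detaches or dies -- even
           via kill -9 -- and other processes can still shmat() by id
           in the meantime, which is a Linux extension. */
        if (shmctl(id, IPC_RMID, NULL) < 0) { perror("shmctl"); return 1; }

        /* ... use base as shared memory ... */
        return 0;
    }

On other Unixes, attaching to a segment already marked for removal
typically fails, so there the mark can only happen after everyone has
attached.)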

What about the other issues? I remember those being a PITA about 15
to 20 years ago, but obviously a lot could have improved in the
meantime.
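
(The space limit, at least, is easy to inspect at runtime. A quick
Linux-only sketch, nothing authoritative:

    #include <stdio.h>

    int main(void)
    {
        /* The shmmax limit Sylvain mentioned; raising it requires
           admin access (sysctl kernel.shmmax). */
        unsigned long long shmmax;
        FILE *f = fopen("/proc/sys/kernel/shmmax", "r");
        if (f == NULL) { perror("fopen"); return 1; }
        if (fscanf(f, "%llu", &shmmax) == 1)
            printf("kernel.shmmax = %llu bytes\n", shmmax);
        fclose(f);
        return 0;
    }

That doesn't remove the conflicts between applications, but it would
at least let an MPI layer fail gracefully when the limit is too
small.)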

Iain