Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] SM init failures
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-03-31 11:00:39


On Mar 31, 2009, at 3:45 AM, Sylvain Jeaugey wrote:

> Sorry to continue off-topic, but going to System V shm would, for me,
> be like going back in time.
>
> System V shared memory used to be the main way to do shared memory in
> MPICH, and from my (limited) experience, it was truly painful:
> - Cleanup issues: does shmctl(IPC_RMID) solve _all_ cases? (even
>   kill -9?)
> - Naming issues: shm segments are identified by a 32-bit key,
>   potentially causing conflicts between applications, or between
>   layers of the same application, on one node
> - Space issues: shm segment sizes are capped by
>   /proc/sys/kernel/shmmax, which needs admin configuration and causes
>   conflicts between MPI applications running on the same node
>

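For concreteness, the classic SysV pattern that all three of those
issues stem from is the ftok()/shmget() pair. A minimal sketch (the
path, project id, and size here are illustrative, not taken from any
real MPI code):

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* Keys are derived from a path plus a one-byte project id, so
           two applications (or two layers of one application) that pick
           the same inputs silently collide on the same 32-bit key. */
        key_t key = ftok("/tmp", 'M');
        if (key == (key_t) -1) { perror("ftok"); return 1; }

        /* Per-segment size is capped by /proc/sys/kernel/shmmax. */
        int shmid = shmget(key, 4096, IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }

        printf("key=0x%x shmid=%d\n", (unsigned) key, shmid);
        return 0;
    }
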
Indeed. The one saving grace here is that the cleanup issue can
apparently be solved on Linux with a mechanism that amounts to
"automatically remove this shmem segment once every process attached
to it has died." That was really the impetus for [re-]investigating
sysv shm. I remember the sysv pain as well; we used it in LAM...
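If the mechanism in question is the usual Linux IPC_RMID-after-attach
idiom (an assumption on my part, not confirmed from the sysv work),
the pattern looks roughly like this:

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* Create an anonymous segment and attach to it. */
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }

        void *addr = shmat(shmid, NULL, 0);
        if (addr == (void *) -1) { perror("shmat"); return 1; }

        /* Mark the segment for removal right away.  On Linux it stays
           usable by processes that are already attached and is destroyed
           by the kernel when the last of them detaches or dies, which
           covers the kill -9 case. */
        if (shmctl(shmid, IPC_RMID, NULL) == -1) { perror("shmctl"); return 1; }

        /* ... use the memory ... */
        shmdt(addr);
        return 0;
    }

The nice property is that the segment can never outlive its last
attacher, so a process dying mid-init doesn't leak it. (In a real MPI
job, the shmid would of course be shared with the local peers before
the IPC_RMID.)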

-- 
Jeff Squyres
Cisco Systems