On 8/25/06, Sven Stork <stork_at_[hidden]> wrote:
> Hello Miguel,
> this is caused by the shared memory mempool. By default this shared
> mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to
> reduce its size, e.g.:
> mpirun -mca mpool_sm_size <SIZE> ...
Is
mpirun -mca mpool_sm_size 0
acceptable? What will it fall back to? Sockets? Pipes? TCP? Smoke signals?
Thank you very much for the fast answer.
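To make it concrete, here is a rough sketch of the invocations I have in mind
(the size value, the assumption that it is given in bytes, and the program name
./my_app are just my guesses, not taken from the docs):

  mpirun -mca mpool_sm_size 67108864 -np 2 ./my_app   # guess: shrink the shared-memory pool to ~64 MB
  mpirun -mca mpool_sm_size 0 -np 2 ./my_app          # the case I'm asking about: disable it entirely?

If I understand ompi_info correctly, "ompi_info --param mpool sm" should also
list the parameter and its current default.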
> On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> > Hi there,
> > I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit
> > chroot environment on that same machine.
> > (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
> > In both cases openmpi-1.1 shows a roughly 400 MB overhead in virtual memory
> > (virtual address space usage) for each MPI process.
> > In my case this is quite troublesome, because in 32-bit mode my application
> > counts on using the whole 4GB address space for the data associated with
> > the problem set.
> > This means a reduction in the size of the problems it can solve.
> > (my application isn't 64-bit safe yet, so I need to run in 32-bit mode and
> > effectively use the 4GB address space)
> > Is there a way to reduce this overhead, by configuring openmpi's buffers,
> > or anything else?
> > I do not see this with mpich2.
> > Best regards,
> > --
> > Miguel Sousa Filipe
Miguel Sousa Filipe