Open MPI User's Mailing List Archives

From: Miguel Figueiredo Mascarenhas Sousa Filipe (miguel.filipe_at_[hidden])
Date: 2006-08-25 09:40:33


Hi,

On 8/25/06, Sven Stork <stork_at_[hidden]> wrote:
>
> Hello Miguel,
>
> this is caused by the shared memory mempool. By default this shared memory
> mapping has a size of 512 MB. You can use the "mpool_sm_size" parameter to
> reduce the size, e.g.
>
> mpirun -mca mpool_sm_size <SIZE> ...

Is using
mpirun -mca mpool_sm_size 0
acceptable?
What will it fall back to? Sockets? Pipes? TCP? Smoke signals?

Thank you very much for the fast answer.
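
For the record, a concrete invocation might look like the sketch below; the
process count, the application name, and the assumption that the value is
given in bytes are illustrative, not taken from this thread:

    # List the sm mpool parameters (names, defaults, descriptions):
    ompi_info --param mpool sm

    # Run with a smaller shared-memory mapping.  Assuming the value is in
    # bytes (consistent with the 512 MB default), 64 MB would be:
    mpirun -np 4 -mca mpool_sm_size 67108864 ./my_mpi_app

Comparing per-process virtual memory with and without the override should
show roughly the difference in mapping size.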

> Thanks,
> Sven
>
> On Friday 25 August 2006 15:04, Miguel Figueiredo Mascarenhas Sousa Filipe
> wrote:
> > Hi there,
> > I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
> > chroot environment on that same machine.
> > (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
> >
> > In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory usage
> > (virtual address space usage) for each MPI process.
> >
> > In my case this is quite troublesome because my application in 32bit mode is
> > counting on using the whole 4GB address space for the problem set size and
> > associated data.
> > This means that I have a reduction in the size of the problems it can
> > solve.
> > (my application isn't 64bit safe yet, so I need to run in 32bit mode and use
> > the 4GB address space effectively)
> >
> >
> > Is there a way to tweak this overhead, by configuring openmpi to use smaller
> > buffers, or anything else?
> >
> > I do not see this with mpich2.
> >
> > Best regards,
> >
> > --
> > Miguel Sousa Filipe
> >
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
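
For reference, the per-process address-space usage described above can be read
directly from /proc on Linux; a minimal sketch (the file name and the printed
fields are my own choice, not something from this thread):

    /* vm_footprint.c -- print each rank's virtual-memory usage as reported
     * in /proc/self/status, to observe the address-space overhead. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        FILE *f = fopen("/proc/self/status", "r");
        if (f != NULL) {
            char line[256];
            while (fgets(line, sizeof(line), f) != NULL) {
                /* VmSize = current virtual address-space size,
                 * VmPeak = peak virtual address-space size. */
                if (strncmp(line, "VmSize:", 7) == 0 ||
                    strncmp(line, "VmPeak:", 7) == 0) {
                    printf("rank %d: %s", rank, line);
                }
            }
            fclose(f);
        }

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched under mpirun, the VmSize reported by each rank
with and without a reduced mpool_sm_size should differ by roughly the size of
the shared-memory mapping.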

-- 
Miguel Sousa Filipe