I suspect this is the shared memory used to communicate between processes.
Please run your application adding the flag "--mca btl tcp,self" to the
mpirun command line (*before* the application name). If the virtual memory
usage goes down, then the 400MB is definitely coming from the shared
memory, and there are ways to limit this amount
(http://www.open-mpi.org/faq/?category=tuning provides a full range of
tuning options).
Otherwise ... we will have to find out where it comes from some other way.
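For example, a full invocation would look something like this (the
application name ./my_app and the process count are placeholders for your
own program and settings):

```shell
# Restrict Open MPI to the TCP and self (loopback) transports,
# disabling the shared-memory BTL for this run.
# ./my_app and -np 4 are placeholders; substitute your own values.
mpirun --mca btl tcp,self -np 4 ./my_app
```

You can then compare the per-process virtual memory usage (e.g. in top)
against a run without the flag.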
On Fri, 25 Aug 2006, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> Hi there,
> I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
> chroot environment on that same machine.
> (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
> In both cases openmpi-1.1 shows a +/-400MB overhead in virtual memory usage
> (virtual address space usage) for each MPI process.
> In my case this is quite troublesome because my application in 32bit mode is
> counting on using the whole 4GB address space for the problem set size and
> associated data.
> This means that I have a reduction in the size of the problems which it can
> solve.
> (my application isn't 64bit safe yet, so I need to run in 32bit mode, and
> use the 4GB address space effectively)
> Is there a way to tweak this overhead, by configuring openmpi to use smaller
> buffers, or anything else ?
> I do not see this with mpich2.
> Best regards,
"We must accept finite disappointment, but we must never lose infinite
hope."
Martin Luther King