Open MPI User's Mailing List Archives

From: George Bosilca (bosilca_at_[hidden])
Date: 2006-08-25 09:29:12


I suspect this is the shared memory used to communicate between processes.
Please run your application with the flag "--mca btl tcp,self" added to the
mpirun command line (before the application name). If the virtual memory
usage goes down, then the 400MB is definitely coming from the shared
memory, and there are ways to limit this amount
(http://www.open-mpi.org/faq/?category=tuning provides a full range of
options).
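
For example, a minimal sketch of the invocation (the process count and
"./my_app" are placeholders, not from the original message), showing that the
flag goes between mpirun's own options and the application name:

    mpirun --mca btl tcp,self -np 4 ./my_app

This restricts Open MPI to the TCP and self transports, so the shared memory
transport is not used for the test.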

Otherwise ... we will have to track down where they come from some other way.

   Thanks,
     george.

On Fri, 25 Aug 2006, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:

> Hi there,
> I'm using openmpi-1.1 on a linux-amd64 machine and also in a linux-32bit x86
> chroot environment on that same machine.
> (distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
>
> In both cases openmpi-1.1 shows roughly a 400MB overhead in virtual memory
> usage (virtual address space usage) for each MPI process.
>
> In my case this is quite troublesome because my application in 32bit mode is
> counting on using the whole 4GB address space for the problem set size and
> associated data.
> This means a reduction in the size of the problems it can solve.
> (my application isn't 64bit safe yet, so I need to run in 32bit mode and make
> effective use of the 4GB address space)
>
>
> Is there a way to reduce this overhead, by configuring openmpi to use smaller
> buffers, or anything else?
>
> I do not see this with mpich2.
>
> Best regards,
>
>

"We must accept finite disappointment, but we must never lose infinite
hope."
                                   Martin Luther King