I'm using openmpi-1.1 on a linux-amd64 machine and also a linux-32bit x86
chroot environment on that same machine.
(distro is gentoo, compilers: gcc-4.1.1 and gcc-3.4.6)
In both cases openmpi-1.1 shows roughly 400MB of overhead in virtual memory
usage (virtual address space usage) for each MPI process.
In my case this is quite troublesome, because in 32bit mode my application
counts on using the whole 4GB address space for its problem set, so this
overhead reduces the size of the problems it can handle.
(My application isn't 64bit safe yet, so I need to run in 32bit mode and
use the 4GB address space effectively.)
Is there a way to reduce this overhead, by configuring openmpi to use
smaller buffers, or anything else?
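In case it helps others with the same question: Open MPI exposes its tunables as MCA parameters, which can be listed with ompi_info and set on the mpirun command line. A sketch (whether a particular transport component is responsible for the overhead here is an assumption; `./my_app` is a placeholder):

```shell
# List all MCA parameters and their current values (names vary by version):
ompi_info --param all all

# Example: restrict Open MPI to the TCP and self transports, in case an
# interconnect component (e.g. openib) is what reserves the large buffers:
mpirun --mca btl tcp,self -np 4 ./my_app
```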
I do not see this with mpich2.
Miguel Sousa Filipe