I can't think of what OMPI would be doing related to the predefined
stack size -- I am not aware of anywhere in the code where we look up
the predefined stack size and then do something with it.
That being said, I don't know what the OS and resource-consumption
effects are of setting a 1GB+ stack size on *any* application... Have
you tried non-MPI examples, potentially with applications as large as
MPI applications but without the complexity of MPI?
On Nov 19, 2009, at 3:13 PM, David Singleton wrote:
> Depending on the setup, threads often get allocated a thread-local
> stack with size equal to the stacksize rlimit. Two threads, maybe?
> Terry Dontje wrote:
> > A couple things to note. First, Sun MPI 8.2.1 is effectively OMPI
> > 1.3.4. I also reproduced the issue below using a C code, so I think
> > this is a general issue with OMPI and not Fortran-based.
> > I did a pmap of a process and there were two anon spaces equal to
> > the stack space set by ulimit.
> > In one case (setting 102400) the anon spaces were next to each other
> > prior to all the loadable libraries. In another case (setting
> > one anon space was located in the same area as the first case, but
> > the second space was deep into some memory used by OMPI.
> > Is any of this possibly related to the predefined handles? Though
> > I am not sure why it would expand based on stack size.
> > --td
> >> Date: Thu, 19 Nov 2009 19:21:46 +0100
> >> From: Paul Kapinos <kapinos_at_[hidden]>
> >> Subject: [OMPI users] exceedingly virtual memory consumption of MPI
> >> environment if higher-setting "ulimit -s"
> >> To: Open MPI Users <users_at_[hidden]>
> >> Message-ID: <4B058CBA.3000105_at_[hidden]>
> >> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
> >> Hi folks,
> >> we see exceedingly high *virtual* memory consumption through MPI
> >> if "ulimit -s" (stack size) is set high in the profile configuration.
> >> Furthermore, we believe every MPI process started wastes about
> >> double the `ulimit -s` value that would be set in a fresh shell
> >> (that is, the value configured in e.g. .zshenv, *not* the value
> >> actually set in the console from which mpiexec runs).
> >> Sun MPI 8.2.1, an empty MPI HelloWorld program,
> >> even if running both processes on the same host:
> >> .zshenv: ulimit -s 10240 --> VmPeak: 180072 kB
> >> .zshenv: ulimit -s 102400 --> VmPeak: 364392 kB
> >> .zshenv: ulimit -s 1024000 --> VmPeak: 2207592 kB
> >> .zshenv: ulimit -s 2024000 --> VmPeak: 4207592 kB
> >> .zshenv: ulimit -s 20240000 --> VmPeak: 39.7 GB!!!!
> >> (see the attached files; the a.out binary is an MPI HelloWorld
> >> running a never-ending loop).
> >> Normally, we set the stack size ulimit to some 10 MB, but we see
> >> a lot of codes which need *a lot* of stack space, e.g. Fortran
> >> codes (and especially Fortran OpenMP codes). Users tend to
> >> hard-code the higher stack size ulimit setting.
> >> Normally, using a lot of virtual memory is no problem, as
> >> there is a lot of this thing :-) But... if more than one person is
> >> allowed to work on a computer, you have to divide the resources in
> >> such a way that nobody can crash the box. We do not know how to
> >> limit the real RAM used, so we need to divide the RAM by means of
> >> setting a virtual memory ulimit (e.g. in our batch system). That is,
> >> for us "virtual memory consumption" = "real memory consumption".
> >> And real memory is not as cheap as virtual memory.
> >> So, why consume *twice* the stack size amount for each process?
> >> And why consume the virtual memory at all? We guess this virtual
> >> memory is allocated for the stack (why else would it be related to
> >> the stack size ulimit). But is such allocation really needed? Is
> >> there a way to avoid the waste of virtual memory?
> >> best regards,
> >> Paul Kapinos