Dear Open MPI developer,
there are often problems with the user limit for the stack size (ulimit -s) on
Linux when running Fortran and/or OpenMP (hybrid) programs.
In one case we saw that a user had accidentally set the stack size in his
environment far too high: to about one terabyte, on nodes with less than 100 GB of RAM.
It turned out that Open MPI (1.6.1) cannot use InfiniBand in this environment
(it cannot activate the IB card / register memory / something else, presumably
for lack of virtual memory - all of it reserved for the stack?). Judging by the
achieved bandwidth, the job seems to fall back to IPoIB.
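(One way to check which limits the remote MPI processes actually inherit is to
launch a shell instead of the application and query the limits there; the
max-locked-memory limit is shown as well, since it is also relevant for
InfiniBand memory registration:

    $MPI_ROOT/bin/mpiexec -np 2 -H linuxbdc01,linuxbdc02 bash -c 'ulimit -s; ulimit -l'

)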
The problem was that not a single word of caution was printed, although Open
MPI usually warns the user if a seemingly available high-performance network
cannot be used, AFAIK. Thus the user's problem - a 15x loss in bandwidth and
performance - went unnoticed for many weeks and was found only by chance.
So, what is going wrong here, if anything?
To reproduce: set 'ulimit -s' in your environment to an astronomically high
value, or use the attached wrapper:
$MPI_ROOT/bin/mpiexec -mca oob_tcp_if_include ib0 -mca btl_tcp_if_include ib0
-np 2 -H linuxbdc01,linuxbdc02 /home/pk224850/bin/ulimit_high.sh MPI_FastTest.exe
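For reference, a minimal version of such a wrapper (assuming bash, and assuming
the hard limit permits raising the soft limit this far; the actual attachment
may differ) could look like this:

    #!/bin/bash
    # Raise the stack size limit to roughly 1 TB (the value is in kB),
    # then replace this shell with the actual program and its arguments.
    ulimit -s 1000000000
    exec "$@"
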
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915