I have a simple, 1-process test case that gets stuck in the MPI_Finalize call. The test case is a dead-simple calculation of pi - 50 lines of Fortran. The process gradually consumes more and more memory until the system becomes unresponsive and needs to be rebooted, unless the job is killed first.
In the output, attached, I see the warning message about OpenFabrics being configured to only allow registering part of physical memory. I’ve tried to chase this down with my administrator to no avail yet. (I am aware of the relevant FAQ entry.) A different installation of MPI on the same system, made with a different compiler, does not produce the OpenFabrics memory registration warning – which seems strange because I thought it was a system configuration issue independent of MPI. Also curious in the output is that LSF seems to think there are 7 processes and 11 threads associated with this job.
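In case it helps others diagnose this, the registered-memory warning is usually tied to the locked-memory (memlock) limit in effect where mpirun actually runs, and LSF daemons can impose a different limit than an interactive login shell - which might also explain why one installation warns and another does not, if they are launched through different paths. A quick way to check (standard commands; the limits.conf path may vary by distro):

```shell
# Show the locked-memory limit for the current shell. The OpenFabrics
# (openib) BTL wants this to be large, ideally "unlimited"; a small value
# here produces exactly the registration warning I am seeing.
ulimit -l

# Per-user limits are typically set here; look for memlock entries.
# (Raising them requires root, e.g. "* hard memlock unlimited".)
grep -s memlock /etc/security/limits.conf || true

# LSF starts jobs under its own daemons, which may carry a lower memlock
# limit than a login shell, so the limit should also be checked from
# inside a batch job, e.g.:
#   bsub -I bash -c 'ulimit -l'
```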
The particulars of my configuration are attached and detailed below. Does anyone see anything potentially problematic?
OpenMPI Version: 1.6.5
Compiler: GCC 4.6.1
OS: SuSE Linux Enterprise Server 10, Patchlevel 2
uname -a: Linux lxlogin2 184.108.40.206-0.21-smp #1 SMP Tue May 6 12:41:02 UTC 2008 x86_64 x86_64 x86_64 GNU/Linux
Execution command: (executed via LSF; effectively "mpirun -np 1 test_program")
users mailing list