Open MPI User's Mailing List Archives

From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2005-10-06 12:19:29


On Oct 6, 2005, at 11:57 AM, Borenstein, Bernard S wrote:

> I built the NASA Overflow 1.8ab code yesterday with
> openmpi-1.0a1r7632.  It runs fine with 4 or 8 Opteron processors on a
> Myrinet Linux cluster.

> But if I increase the number of processors to 20, I get errors like
> this:

Thanks for doing this testing! Our gm code should be much more stable
than it was last week (some critical bug fixes got in earlier this
week), so I'm disappointed that you're still seeing failures. :-(

Can we download the NASA Overflow code and run it with your input to
try to replicate the failures? I don't see an obvious download link on
http://rotorcraft.arc.nasa.gov/cfd/CFD4/New_Page/Overflow-D2.htm, but I
have a dim recollection that there are some restrictions on obtaining
this code...?

> [e053:01260] *** An error occurred in MPI_Free_mem
> [e030:15585] *** An error occurred in MPI_Free_mem
> [e013:27621] *** An error occurred in MPI_Free_mem

Interesting -- does Overflow test whether the MPI implementation
provides MPI_ALLOC_MEM and MPI_FREE_MEM? I ask because I'm guessing
that this code runs properly with MPICH-GM -- but I'm *pretty sure*
that MPICH-GM does not have MPI_ALLOC_MEM / MPI_FREE_MEM (don't quote
me on that, though).
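
For reference, here's a minimal sketch (not Overflow's actual code --
the buffer size and variable names are just illustrative) of how an
application can guard its use of MPI_ALLOC_MEM with a plain malloc()
fallback, so it still runs on MPIs that can't satisfy the request:

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        void *buf = NULL;
        size_t len = 1 << 20;        /* 1 MB -- arbitrary for the example */
        int used_mpi_alloc = 0;

        MPI_Init(&argc, &argv);

        /* Return error codes instead of aborting, so the fallback below
           can actually run. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        /* Ask MPI for (possibly registered) memory; fall back to malloc()
           if the call fails. */
        if (MPI_Alloc_mem((MPI_Aint) len, MPI_INFO_NULL, &buf) == MPI_SUCCESS) {
            used_mpi_alloc = 1;
        } else {
            buf = malloc(len);
        }

        /* ... use buf for communication ... */

        /* Free with the matching call: MPI_Free_mem only for memory that
           came from MPI_Alloc_mem. */
        if (used_mpi_alloc) {
            MPI_Free_mem(buf);
        } else {
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }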

But then again, there doesn't seem to be a good reason to report "out
of memory" from MPI_FREE_MEM. :-) So if we could replicate the
problem, that would probably be the most helpful thing -- how would we
obtain this software?
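
If it helps with diagnosis in the meantime, here's a rough sketch
(again just an assumption about how one could instrument things, not
Overflow's code) that switches MPI_COMM_WORLD to MPI_ERRORS_RETURN and
prints the error class and string that MPI_Free_mem reports:

    #include <stdio.h>
    #include <mpi.h>

    /* Print the MPI error class and message for a failed call. */
    static void report_mpi_error(const char *where, int rc)
    {
        char msg[MPI_MAX_ERROR_STRING];
        int len = 0, errclass = 0;

        MPI_Error_class(rc, &errclass);
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "%s failed: error class %d: %s\n", where, errclass, msg);
    }

    int main(int argc, char **argv)
    {
        void *buf = NULL;
        int rc;

        MPI_Init(&argc, &argv);

        /* Have errors returned to the caller instead of aborting the job. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        rc = MPI_Alloc_mem((MPI_Aint) (1 << 20), MPI_INFO_NULL, &buf);
        if (rc != MPI_SUCCESS) {
            report_mpi_error("MPI_Alloc_mem", rc);
        } else {
            rc = MPI_Free_mem(buf);
            if (rc != MPI_SUCCESS) report_mpi_error("MPI_Free_mem", rc);
        }

        MPI_Finalize();
        return 0;
    }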

-- 
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/