
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] Possible OMPI 1.6.5 bug? SEGV in malloc.c
From: Christopher Samuel (samuel_at_[hidden])
Date: 2013-08-30 02:01:12


Hiya Jeff,

On 30/08/13 11:13, Jeff Squyres (jsquyres) wrote:

> FWIW, the stack traces you sent are not during MPI_INIT.

I did say it was a suspicion. ;-)

> What happens with OMPI's memory manager is that it inserts itself
> to be *the* memory allocator for the entire process before main()
> even starts. We have to do this as part of the horribleness
> that is OpenFabrics/verbs and how it just doesn't match the MPI
> programming model at all. :-( (I think I wrote some blog entries
> about this a while ago... Ah, here's a few:

Thanks! I'll take a look next week (just got out of a 5.5 hour
meeting and have to head home now).

> Therefore, (in C) if you call malloc() before MPI_Init(), it'll be
> calling OMPI's ptmalloc. The stack traces you sent imply that
> it's just when your app is calling the Fortran allocate -- which is
> after MPI_Init().

OK, that makes sense.

> FWIW, you can build OMPI with --without-memory-manager, or you can
> setenv OMPI_MCA_memory_linux_disable to 1 (note: this is NOT a
> regular MCA parameter -- it *must* be set in the environment
> before the MPI app starts). If this env variable is set, OMPI will
> *not* interpose its own memory manager in the pre-main hook. That
> should be a quick/easy way to try with and without the memory
> manager and see what happens.

Well, with OMPI_MCA_memory_linux_disable=1 I don't get the crash at
all, nor the spin with the Intel compiler build. Nice!
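[For the archives, Jeff's two knobs look like this in practice (illustrative session; the mpirun arguments and ./my_mpi_app are placeholders):]

```shell
# Build time: configure Open MPI without the memory manager entirely
./configure --without-memory-manager

# Run time: disable the pre-main interposition for one job.
# Must be in the environment before the MPI app starts -- it is NOT
# a regular MCA parameter, so passing it via -mca won't work.
export OMPI_MCA_memory_linux_disable=1
mpirun -np 4 ./my_mpi_app
```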

Thanks for this; I'll take a further look next week.

Very much obliged,
--
 Christopher Samuel Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: samuel_at_[hidden] Phone: +61 (0)3 903 55545
