Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Heap profiling with OpenMPI
From: Jan Ploski (Jan.Ploski_at_[hidden])
Date: 2008-08-07 03:20:26

users-bounces_at_[hidden] wrote on 08/06/2008 07:44:03 PM:

> On Aug 6, 2008, at 12:37 PM, Jan Ploski wrote:
> >> I'm using the latest of Open MPI compiled with debug turned on, and
> >> valgrind 3.3.0. From your trace it looks like there is a conflict
> >> between two memory managers. I'm not having the same problem as I
> >> disable the Open MPI memory manager on my builds (configure option
> >> --without-memory-manager).
> >
> > Thanks for the tip! I confirm that the problem goes away after
> > rebuilding --without-memory-manager.
> >
> > As I also have the same problem in another cluster, I'd like to know
> > what side effects using this configuration option might have before
> > suggesting it as a solution to that cluster's admin. I didn't find
> > an explanation of what it does in the FAQ (beyond a recommendation
> > to use it for static builds). Could you please explain this option,
> > especially why one might want to *not* use it?
> This is on my to-do list (add this to the FAQ); sorry it isn't done yet.
> Here's a recent post where I've explained it a bit more:
> Let me know if you'd like to know more.


Thanks for this explanation. From what you wrote, --without-memory-manager
can make my application, and others, run significantly slower. While I can
measure just how much for my own application, I can hardly do so for other
(unknown) users. So it would be nice if my heap profiling problem could be
resolved in another way in the future. Will the planned mpi_leave_pinned
change in v1.3 correct it?
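For anyone finding this thread later, the workaround discussed above amounts to something like the following; the install prefix, process count, and application name are placeholders, and massif is just one example of a valgrind heap profiler:

```shell
# Rebuild Open MPI without its internal memory manager, so that
# valgrind's own malloc/free interception is not in conflict with it.
./configure --prefix=$HOME/openmpi-no-mm --without-memory-manager
make -j4 && make install

# Then heap-profile the MPI application, e.g. with valgrind's massif tool.
$HOME/openmpi-no-mm/bin/mpirun -np 2 \
    valgrind --tool=massif ./my_mpi_app
```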

Jan Ploski

Dipl.-Inform. (FH) Jan Ploski
FuE Bereich Energie | R&D Division Energy
Escherweg 2  - 26121 Oldenburg - Germany
Phone/Fax: +49 441 9722 - 184 / 202
E-Mail: Jan.Ploski_at_[hidden]