
Subject: Re: [OMPI devel] RFC: make mpi_leave_pinned=1 the default
From: Brian W. Barrett (brbarret_at_[hidden])
Date: 2008-07-03 11:02:14


As long as we don't go back to libptmalloc2 linked into libmpi, I don't
have strong objections.

Brian

On Thu, 3 Jul 2008, Jeff Squyres wrote:

> WHAT: make mpi_leave_pinned=1 by default when a BTL is used that would
> benefit from it (when possible; 0 when not, obviously)
>
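> For context, "leave pinned" means caching registrations: pin (register) a
> buffer once and re-use that registration on later transfers, instead of
> registering/deregistering around every send. A minimal sketch of the idea,
> using the real verbs calls but a made-up toy cache:
>
>     #include <stddef.h>
>     #include <infiniband/verbs.h>
>
>     /* Toy registration cache (hypothetical; real ones are trees with
>      * eviction).  On a hit the pinned region is re-used; on a miss we
>      * pin it and remember it -- i.e., we "leave it pinned". */
>     #define SLOTS 64
>     static struct { void *addr; size_t len; struct ibv_mr *mr; } cache[SLOTS];
>
>     static struct ibv_mr *get_mr(struct ibv_pd *pd, void *addr, size_t len)
>     {
>         for (int i = 0; i < SLOTS; ++i)        /* hit: skip ibv_reg_mr */
>             if (cache[i].mr && cache[i].addr == addr && cache[i].len >= len)
>                 return cache[i].mr;
>         struct ibv_mr *mr = ibv_reg_mr(pd, addr, len, IBV_ACCESS_LOCAL_WRITE);
>         for (int i = 0; i < SLOTS; ++i)        /* miss: leave it pinned */
>             if (NULL == cache[i].mr) {
>                 cache[i].addr = addr; cache[i].len = len; cache[i].mr = mr;
>                 break;
>             }
>         return mr;
>     }
>
> The catch is that if the application free()s a cached buffer and the
> allocator hands those pages back to the kernel, the cached registration
> goes stale -- which is exactly why the ptmalloc hooks / mallopt settings
> below matter.
>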
> WHY: Several reasons:
> - we continually get beat up because of "lower performance" on benchmarks by
> default (I get beat up, at least ;-) )
> - ptmalloc is no longer compiled into user apps by default, but mallopt may be
> available
> - ptmalloc has been linked in on many platforms by default for a long time
> - our ptmalloc settings were such that memory was rarely returned to the OS
> -- quite similar to mallopt
> - very few people have complained about the above policy
> - therefore, it may be OK to apply the mallopt settings by default if there is
> a device in the run that would benefit from them (see the sketch below)
>
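> A minimal sketch of the mallopt settings in question (assuming glibc; the
> helper name is made up):
>
>     #include <malloc.h>
>
>     /* Sketch: keep glibc malloc from ever returning heap pages to the
>      * kernel, so buffers that were pinned stay mapped even after the
>      * application free()s them. */
>     static void leave_pinned_mallopt(void)   /* hypothetical name */
>     {
>         mallopt(M_TRIM_THRESHOLD, -1);  /* never trim the top of the heap */
>         mallopt(M_MMAP_MAX, 0);         /* never use mmap for large allocs */
>     }
>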
> WHERE: openib BTL, MPI runtime directory
>
> WHEN: before v1.3 ships
>
> TIMEOUT: Fri, July 11, 2008
>
> ----------------------
>
> I'm assuming that this topic will generate a fair amount of conversation.
> :-)
>
> I'm basically getting tired of people complaining that OMPI has lower default
> benchmark performance on OpenFabrics networks. I don't mind explaining the
> mpi_leave_pinned flag; what I do mind is customers and users who refuse
> to use it (which is at least sort of understandable). I also mind that other
> MPI implementations (sometimes knowingly) compare Open MPI without
> leave_pinned to their implementations with leave_pinned. Explaining it after
> the fact is never quite as compelling when there is a big poster on a show
> floor showing MPI XYZ with great ping pong performance and OMPI with lousy
> ping pong performance.
>
> Note that:
>
> - OMPI is the only MPI that doesn't do the "leave pinned" trick by default on
> OpenFabrics networks
> - I know that pingpong benchmarks are meaningless. Customers and users don't
> care. We cannot move this mountain.
> - I know that leave_pinned is frequently meaningless to real apps (although
> Torsten likes to argue otherwise -- and he's got at least some real-world
> data points to back that up :-) ).
> - I know that it's only OpenFabrics networks that require this setting and
> that many people think OpenFabrics is broken because of this. Let's leave
> such religious arguments at the door; I'm not happy we have to do it either,
> but that's not the issue here.
>
> So my proposal is to enable mpi_leave_pinned by default:
>
> - if there's a BTL in the app that would benefit (i.e., openib). This would
> likely entail adding some clever callback from the openib BTL init, or
> somesuch (I have not thought this through yet; a rough sketch follows)
> - if mallopt or ptmalloc is available
>
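> One hypothetical shape for that callback (every name below is made up; the
> real hook would go wherever the BTL init plumbing allows):
>
>     #include <stdbool.h>
>
>     /* Hypothetical: a BTL that benefits from leave_pinned raises a flag
>      * during component init; the MPI layer then applies the mallopt
>      * settings at MPI_Init time, unless the user set mpi_leave_pinned
>      * explicitly. */
>     bool ompi_leave_pinned_requested = false;    /* made-up global */
>
>     static int btl_openib_init_sketch(void)
>     {
>         ompi_leave_pinned_requested = true;      /* "I'd benefit from it" */
>         return 0;
>     }
>
> Anyone who doesn't want the new default could still turn it off explicitly
> with "mpirun --mca mpi_leave_pinned 0 ...".
>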
> Comments?
>
>