Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] [patch] MPI-2.2: Ordering of attribution deletion callbacks on MPI_COMM_SELF
From: KAWASHIMA Takahiro (rivis.kawashima_at_[hidden])
Date: 2013-01-17 18:09:34


OK. I'll try implementing George's idea and then you can compare which
one is simpler.


> Not that I'm aware of; that would be great.
> Unlike George, however, I'm not concerned about converting to linear operations for attributes.
> Attributes are not used often, but when they are:
> a) there aren't many of them (so a linear penalty is trivial)
> b) they're expected to be low performance
> So if it makes the code simpler, I certainly don't mind linear operations.
> On Jan 17, 2013, at 9:32 AM, KAWASHIMA Takahiro <rivis.kawashima_at_[hidden]>
> wrote:
> > George,
> >
> > Your idea makes sense.
> > Is anyone working on it? If not, I'll try.
> >
> > Regards,
> > KAWASHIMA Takahiro
> >
> >> Takahiro,
> >>
> >> Thanks for the patch. I deplore the loss of the hash table in the attribute management, as the potential of transforming all attribute operations to linear complexity is not very appealing.
> >>
> >> As you already took decision (C), the hash table is no longer relevant at the communicator destruction stage. Thus, I would have converted the hash table to an ordered list (ordered by the creation index, a global entity atomically updated every time an attribute is created), and proceeded to destroy the attributes in the desired order. Thus instead of having a linear operation for every operation on attributes, we only have a single linear operation per communicator (and this only during the destruction stage).
> >>
> >> George.
> >>
> >> On Jan 16, 2013, at 16:37 , KAWASHIMA Takahiro <rivis.kawashima_at_[hidden]> wrote:
> >>
> >>> Hi,
> >>>
> >>> I've implemented ticket #3123 "MPI-2.2: Ordering of attribution deletion
> >>> callbacks on MPI_COMM_SELF".
> >>>
> >>>
> >>>
> >>> As this ticket says, attributes had been stored in an unordered hash.
> >>> So I've replaced opal_hash_table_t with opal_list_t and made the
> >>> necessary modifications. I've also fixed some issues with multi-threaded
> >>> concurrent (get|set|delete)_attr calls.
> >>>
> >>> This modification introduces the following behavior changes.
> >>>
> >>> (A) The MPI_(Comm|Type|Win)_(get|set|delete)_attr functions may be slower
> >>> for MPI objects that have many attributes attached.
> >>> (B) When the user-defined delete callback function is called, the
> >>> attribute has already been removed from the list. In other words,
> >>> if MPI_(Comm|Type|Win)_get_attr is called from within the user-defined
> >>> delete callback function for the same attribute key, it returns
> >>> flag = false.
> >>> (C) Even if the user-defined delete callback function returns a non-
> >>> MPI_SUCCESS value, the attribute is not restored to the list.
> >>>
> >>> (A) is due to a sequential list search instead of a hash lookup. See the
> >>> find_value function for its implementation.
> >>> (B) and (C) are due to the atomic deletion of the attribute, which allows
> >>> multi-threaded concurrent (get|set|delete)_attr calls in MPI_THREAD_MULTIPLE.
> >>> See the ompi_attr_delete function for its implementation. I think this does
> >>> not matter because the MPI standard doesn't specify the behavior in these cases.
> >>>
> >>> The patch for Open MPI trunk is attached. If you like it, please take
> >>> it in.
> >>>
> >>> Though I'm an employee of a company, this is my independent and private
> >>> work done at home, with no intellectual property from my company. If
> >>> needed, I'll sign the Individual Contributor License Agreement.