Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] [OMPI svn] svn:open-mpi r30571 - trunk/ompi/runtime
From: Ralph Castain (rhc_at_[hidden])
Date: 2014-02-06 12:27:40


Kewl - I'll add it in the next wave. Meantime, we can revert this one.

Thanks!
Ralph
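
For context, the duplication George describes below looks roughly like the following. This is a minimal sketch, not the actual ompi_mpi_finalize.c: the calls (ompi_proc_world, MCA_PML_CALL(del_procs(...))) are taken from the diff quoted further down, while the surrounding function structure is assumed for illustration.

/* Sketch only: not the actual Open MPI source. After r30430,
 * ompi_mpi_finalize() already tears down per-proc PML state with
 * del_procs; r30571 adds the same sequence a few lines later, so
 * the second pass is redundant and can be reverted safely. */
static int finalize_del_procs_sketch(void)
{
    ompi_proc_t **procs;
    size_t nprocs;
    int ret;

    /* r30430: existing teardown, already in the trunk */
    if (NULL == (procs = ompi_proc_world(&nprocs))) {
        return OMPI_ERROR;
    }
    if (OMPI_SUCCESS != (ret = MCA_PML_CALL(del_procs(procs, nprocs)))) {
        free(procs);   /* ompi_proc_world() allocates; caller frees */
        return ret;
    }
    free(procs);

    /* ... a few lines later, r30571 (the commit under discussion)
     * repeats the identical ompi_proc_world/del_procs/free sequence,
     * which is the duplication George points out below. */

    return OMPI_SUCCESS;
}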

On Feb 6, 2014, at 9:18 AM, Joshua Ladd <joshual_at_[hidden]> wrote:

> It’s been CMRed, but scheduled for 1.7.5
>
> https://svn.open-mpi.org/trac/ompi/ticket/4185
>
> From: devel [mailto:devel-bounces_at_[hidden]] On Behalf Of Mike Dubman
> Sent: Thursday, February 06, 2014 12:17 PM
> To: Open MPI Developers
> Subject: Re: [OMPI devel] [OMPI svn] svn:open-mpi r30571 - trunk/ompi/runtime
>
> It seems that similar code is not in the v1.7 tree.
>
>
> On Thu, Feb 6, 2014 at 2:40 PM, George Bosilca <bosilca_at_[hidden]> wrote:
> This commit is unnecessary. The call to del_procs is already there, a few lines above your own patch. It was introduced on Jan 26, 2014 with commit https://svn.open-mpi.org/trac/ompi/changeset/30430.
>
> George.
>
>
>
> On Feb 6, 2014, at 09:38, svn-commit-mailer_at_[hidden] wrote:
>
> > Author: miked (Mike Dubman)
> > Date: 2014-02-06 03:38:32 EST (Thu, 06 Feb 2014)
> > New Revision: 30571
> > URL: https://svn.open-mpi.org/trac/ompi/changeset/30571
> >
> > Log:
> > OMPI: add call to del_procs
> >
> > fixed by AlexM, reviewed by miked
> > cmr=v1.7.5:reviewer=ompi-rm1.7
> >
> > Text files modified:
> > trunk/ompi/runtime/ompi_mpi_finalize.c | 15 +++++++++++++++
> > 1 files changed, 15 insertions(+), 0 deletions(-)
> >
> > Modified: trunk/ompi/runtime/ompi_mpi_finalize.c
> > ==============================================================================
> > --- trunk/ompi/runtime/ompi_mpi_finalize.c Wed Feb 5 17:49:26 2014 (r30570)
> > +++ trunk/ompi/runtime/ompi_mpi_finalize.c 2014-02-06 03:38:32 EST (Thu, 06 Feb 2014) (r30571)
> > @@ -94,6 +94,9 @@
> > opal_list_item_t *item;
> > struct timeval ompistart, ompistop;
> > ompi_rte_collective_t *coll;
> > + ompi_proc_t** procs;
> > + size_t nprocs;
> > +
> >
> > /* Be a bit social if an erroneous program calls MPI_FINALIZE in
> > two different threads, otherwise we may deadlock in
> > @@ -150,6 +153,18 @@
> > MPI lifetime, to get better latency when not using TCP */
> > opal_progress_event_users_increment();
> >
> > +
> > + if (NULL == (procs = ompi_proc_world(&nprocs))) {
> > + return OMPI_ERROR;
> > + }
> > +
> > + if (OMPI_SUCCESS != (ret = MCA_PML_CALL(del_procs(procs, nprocs)))) {
> > + free(procs);
> > + return ret;
> > + }
> > + free(procs);
> > +
> > +
> > /* check to see if we want timing information */
> > if (ompi_enable_timing != 0 && 0 == OMPI_PROC_MY_NAME->vpid) {
> > gettimeofday(&ompistart, NULL);