Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] Hang in collectives involving shared memory
From: Ralph Castain (rhc_at_[hidden])
Date: 2009-06-10 15:15:35


Well, it would - except then -all- the procs would run real slow! :-)

Still, might be a reasonable diagnostic step to try...will give it a shot.
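For context, the diagnostic being discussed could be tried in two ways. The commands below are an illustrative sketch for Open MPI builds of this era; the exact configure flag spelling and the install prefix are assumptions, not taken from the thread:

```shell
# Diagnostic 1 (Bogdan's suggestion): rebuild Open MPI without libnuma
# support. --without-libnuma is the standard autoconf negation of the
# --with-libnuma option; prefix is just an example location.
./configure --without-libnuma --prefix=$HOME/ompi-no-numa
make -j4 install

# Diagnostic 2: leave the build alone and disable the shared-memory BTL
# at run time, to see whether the hang in collectives disappears
# (at the cost of all on-node traffic going over a slower transport):
mpirun --mca btl ^sm -np 8 ./my_app
```

The `^sm` syntax excludes the shared-memory component from the BTL selection, which is why Ralph notes that all the procs would then run slowly: intra-node messages fall back to a non-shared-memory path.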

On Wed, Jun 10, 2009 at 1:12 PM, Bogdan Costescu <Bogdan.Costescu_at_[hidden]> wrote:

> On Wed, 10 Jun 2009, Ralph Castain wrote:
>
>> I appreciate the input and have captured it in the ticket. Since this
>> appears to be a NUMA-related issue, the lack of NUMA support in your setup
>> makes the test difficult to interpret.
>>
>
> Based on this reasoning, disabling libnuma support in your Open MPI build
> should also solve the problem, or am I interpreting things the wrong way?
>
>
> --
> Bogdan Costescu
>
> IWR, University of Heidelberg, INF 368, D-69120 Heidelberg, Germany
> Phone: +49 6221 54 8240, Fax: +49 6221 54 8850
> E-mail: bogdan.costescu_at_[hidden]
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>