
Open MPI Development Mailing List Archives


This web mail archive is frozen; no new mails have been added to it since July of 2016.

Subject: Re: [OMPI devel] problem when binding to socket on a single socket node
From: Ralph Castain (rhc_at_[hidden])
Date: 2010-04-12 14:18:41

Let me put this succinctly - I DO NOT CARE!

I wrote this stuff, warning you folks from Sun in particular that you were opening a can of worms. As I said then, I'll do it once, but the vast range of corner cases will make this a nightmare that I will NOT continue to chase.

Welcome to YOUR nightmare. :-)

On Apr 12, 2010, at 12:11 PM, Eugene Loh wrote:

> Ralph Castain wrote:
>> If someone tells us -bind-to-socket, but there is only one socket, then we really cannot bind them to anything. Any check by their code would reveal that they had not, in fact, been bound - raising questions as to whether or not OMPI is performing the request. Our operating standard has been to error out if the user specifies something we cannot do to avoid that kind of confusion. This is what generated the code in the system today.
>> Now I can see an argument that -bind-to-socket with one socket maybe shouldn't generate an error, but that decision then has to get reflected in other code areas as well.
>> But first we need to resolve the question: should this scenario return an error or not?
> Okay, so my bind-to-board example didn't pass muster. How about this one? This is a node with 8 cores (0-7):
> % mpirun -H mynode -n 1 -slot-list 0-7 -report-bindings hostname
> [mynode:27978] [[17644,0],0] odls:default:fork binding child [[17644,1],0] to slot_list 0-7
> mynode
> I bind to all cores. mpirun does not complain. Indeed, it reports that I'm bound to all cores.
> _______________________________________________
> devel mailing list
> devel_at_[hidden]