
Open MPI Development Mailing List Archives


From: Brian W. Barrett (bbarrett_at_[hidden])
Date: 2007-01-22 13:03:15

On Jan 22, 2007, at 10:39 AM, Greg Watson wrote:

> On Jan 22, 2007, at 9:48 AM, Ralph H Castain wrote:
>> On 1/22/07 9:39 AM, "Greg Watson" <gwatson_at_[hidden]> wrote:
>>> I tried adding '-mca btl ^sm -mca mpi_preconnect_all 1' to the
>>> mpirun
>>> command line but it still fails with identical error messages.
>>> I don't understand the issue with allocating nodes under bproc.
>>> Older
>>> versions of OMPI have always just queried bproc for the nodes that
>>> have permissions set so I can execute on them. I've never had to
>>> allocate any nodes using a hostfile or any other mechanism. Are you
>>> saying that this no longer works?
>> Turned out that mode of operation was a "bug" that caused all
>> kinds of
>> problems in production environments - that's been fixed for quite
>> some time.
>> So, yes - you do have to get an official "allocation" of some kind.
>> Even the
>> changes I mentioned wouldn't remove that requirement in the way you
>> describe.
> BTW, there's no requirement for a bproc system to employ a job
> scheduler. So in my view OMPI is "broken" for bproc systems if it
> imposes such a requirement.

I agree that the present assumption that BProc requires LSF to be in use
is broken, and we will have a fix for that shortly. However, we will still
require a resource allocator of some sort (even a hostfile should
work) to tell us which nodes to run on. It should be possible to
write a resource allocator that just grabs nodes out of the available
pool returned by the bproc status functions, but I don't believe
that's on the to-do list for the near future...
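For anyone hitting the same issue in the meantime, a minimal sketch of
the hostfile workaround described above might look like this (the node
names and slot counts are placeholders; substitute the bproc nodes you
actually have execute permission on):

```shell
# Hypothetical hostfile listing the nodes to run on
# (names and slot counts are placeholders):
cat > myhosts <<EOF
n0 slots=2
n1 slots=2
EOF

# Hand Open MPI that explicit "allocation" via --hostfile:
mpirun --hostfile myhosts -np 4 ./a.out
```

This uses the standard Open MPI hostfile format (one node per line,
with an optional "slots=" count), so no scheduler needs to be present.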


   Brian Barrett
   Open MPI Team, CCS-1
   Los Alamos National Laboratory