Open MPI Development Mailing List Archives

From: Greg Watson (gwatson_at_[hidden])
Date: 2007-01-22 12:39:55


On Jan 22, 2007, at 9:48 AM, Ralph H Castain wrote:

>
> On 1/22/07 9:39 AM, "Greg Watson" <gwatson_at_[hidden]> wrote:
>
>> I tried adding '-mca btl ^sm -mca mpi_preconnect_all 1' to the mpirun
>> command line but it still fails with identical error messages.
>>
>> I don't understand the issue with allocating nodes under bproc. Older
>> versions of OMPI have always just queried bproc for the nodes that
>> have permissions set so I can execute on them. I've never had to
>> allocate any nodes using a hostfile or any other mechanism. Are you
>> saying that this no longer works?
>
> Turned out that mode of operation was a "bug" that caused all kinds of
> problems in production environments - that's been fixed for quite some
> time. So, yes - you do have to get an official "allocation" of some
> kind. Even the changes I mentioned wouldn't remove that requirement in
> the way you describe.
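
For reference, the options mentioned above go on the mpirun command line
roughly like this (the process count and application name here are just
placeholders):

   mpirun -np 4 -mca btl ^sm -mca mpi_preconnect_all 1 ./a.out

'-mca btl ^sm' excludes the shared-memory BTL, and '-mca
mpi_preconnect_all 1' asks Open MPI to establish all connections during
MPI_Init rather than on first use.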

BTW, there's no requirement for a bproc system to employ a job
scheduler. So in my view OMPI is "broken" for bproc systems if it
imposes such a requirement.

Greg