Open MPI Development Mailing List Archives


From: Ralph H Castain (rhc_at_[hidden])
Date: 2007-07-24 11:39:00


Yo all

As you know, I am working on revamping the hostfile functionality to make it
work better with managed environments (at the moment, the two are mutually
exclusive). The issue we need to review is how the interaction should work,
both for the initial launch and for comm_spawn.

In talking with Jeff, we boiled it down to two options that I have
flow-charted (see attached):

Option 1: in this mode, we read any allocated nodes provided by a resource
manager (e.g., SLURM). These nodes establish a base pool of nodes that can
be used by both the initial launch and any dynamic comm_spawn requests. The
hostfile and any -host info is then used to select nodes from within that
pool for use with the specific launch. The initial launch would use the
-hostfile or -host command line option to provide that info - comm_spawn
would use the MPI_Info fields to provide similar info.

This mode has the advantage of allowing a user to obtain a large allocation,
and then designate hosts within the pool for use by an initial application,
and separately designate (via another hostfile or -host spec) another set of
those hosts from the pool to support a comm_spawn'd child job.

If no resource managed nodes are found, then the hostfile and -host options
would provide the list of hosts to be used. Again, comm_spawn'd jobs would
be able to specify their own hostfile and -host nodes.
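A minimal sketch of the Option 1 logic, in plain Python with hypothetical function and variable names (this is not ORTE code, just an illustration of the pool/filter behavior described above):

```python
def base_pool(rm_alloc, hostfile_hosts, dash_hosts):
    """Option 1: nodes allocated by a resource manager (e.g., SLURM)
    form the base pool. If there is no managed allocation, the
    hostfile/-host entries themselves become the pool."""
    if rm_alloc:
        return list(rm_alloc)
    # preserve order, drop duplicates
    return list(dict.fromkeys(hostfile_hosts + dash_hosts))

def select_for_launch(pool, hostfile_hosts, dash_hosts):
    """Each launch (the initial app via -hostfile/-host, or a
    comm_spawn'd child via its MPI_Info fields) selects its hosts
    from within the base pool."""
    wanted = set(hostfile_hosts) | set(dash_hosts)
    if not wanted:  # no filter given: the whole pool is available
        return list(pool)
    return [h for h in pool if h in wanted]
```

For example, with an allocation of nodes a-d, the initial application could select a and b while a comm_spawn'd child separately selects c and d from the same pool.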

The negative to this option is complexity - in the absence of a managed
allocation, I either have to deal with hostfile/dash-host allocations in the
RAS and then again in RMAPS, or I have "allocation-like" functionality
happening in RMAPS.

Option 2: in this mode, we read any allocated nodes provided by a resource
manager, and then filter those using the command line hostfile and -host
options to establish our base pool. Any spawn commands (both the initial one
and comm_spawn'd child jobs) would utilize this filtered pool of nodes.
Thus, comm_spawn is restricted to using hosts from that initial pool.

We could possibly extend this option by applying only the hostfile in the
initial filter. In other words, the hostfile would downselect the resource
manager's allocation to form the pool, while any -host options on the
command line would restrict only the hosts used to launch the initial
application. Any comm_spawn would draw from the hostfile-filtered pool.

The advantage here is simplicity. The disadvantage lies in flexibility for
supporting dynamic operations.

The major difference between these options really only impacts the initial
pool of hosts to be used for launches, both the initial one and any
subsequent comm_spawns. Barring any commentary, I will implement option 1 as
this provides the maximum flexibility.

Any thoughts? Other options we should consider?

Thanks
Ralph