Open MPI Development Mailing List Archives

From: Bill McMillan (bmcmillan_at_[hidden])
Date: 2007-07-23 17:30:59


Sorry for the delay in replying.

> first of all, thanks for the info bill! i think i'm really starting to
> piece things together now. you are right in that i'm working with a 6.x
> (6.2 with 6.1 devel libs ;) install here at cadence, without the HPC
> extensions AFAIK. also, i think that our customers are mostly in the
> same position -- i assume that the HPC extensions cost extra? or
> perhaps admins just don't bother to install them.

 Since most apps in EDA are sequential, most admins haven't installed
 the extensions.

> i'll try to gather more data, but my feeling is that the market
> penetration of both HPC and LSF 7.0 is low in our market (EDA vendors
> and customers). i'd love to just stall until 7.0 is widely available,
> but perhaps in the mean time it would be nice to have some backward
> support for LSF 6.0 'base'. it seems like supporting LSF 6.x w/ HPC
> might not be too useful, since:
> a) it's not clear that the 'built in' "bsub -n N -a openmpi foo"
> support will work with an MPI-2 dynamic-spawning application like mine
> (or does it?),

 From an LSF perspective, you get allocated N slots, and how the
 application uses them is pretty much at its discretion. So in this case
 it would start orted on each allocated node, and you can create
 whatever dynamic processes you like from your openmpi app within that
 allocation.

 At present the actual allocation is fixed, but there will be support
 for changing it in a forthcoming release.
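 The submission this describes can be sketched as follows (a minimal
 sketch, assuming the HPC integration's -a openmpi esub is installed;
 ./master is a hypothetical MPI-2 application binary):

```shell
# Reserve 8 slots up front; with -a openmpi, LSF starts orted on each
# allocated node. How the slots are used is up to the application, e.g.
# one master process that MPI_Comm_spawn()s workers inside the fixed
# allocation.
bsub -n 8 -a openmpi mpirun -n 1 ./master
```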

> b) i've heard that manually interfacing with the parallel application
> manager directly is tricky?

 If you don't use the supplied methods (such as the -a openmpi method)
 then it can be a little tricky to set it up the first time.

> c) most importantly, it's not clear that any of our customers have the
> HPC support, and certainly not all of them, so i need to support LSF
> 6.0 'base' anyway -- it only needs to work until 7.0 is widely
> available (< 1 year? i really have no idea ... will Platform end
> support for 6.x at some particular time? or otherwise push customers
> to upgrade? perhaps cadence can help there too ...).

 The -a openmpi method works with LSF 6.x, and will be supported until
 at least the end of the decade. It sounds like the simplest solution
 may be to make the HPC extensions available as a patch kit for
 everyone.

> 1) use bsub -n N, followed by N-1 ls_rtaske() calls (or similar).
> while ls_rtaske() may not 'force' me to follow the queuing rules, if i
> only launch on the proper machines, i should be okay, right? i don't
> think IO and process marshaling (i'm not sure exactly what you mean by
> that) are a problem since openmpi/orted handles those issues, i think?

 Yes, it will work; however, it has two drawbacks:
 * In this model you essentially become responsible for error handling
   if a remote task dies, and for cleaning up gracefully if the master
   process dies.
 * From a process accounting (and hence scheduling) point of view,
   resources consumed by the remote tasks are not attributed to the
   master task.
 The -a openmpi method (and blaunch) handles both of these cases.
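 For comparison, a blaunch-based launch might look like this (a sketch,
 assuming an LSF release that ships blaunch, and a hypothetical
 ./worker task binary):

```shell
# Run from inside a job submitted with "bsub -n 4 ...". LSB_HOSTS lists
# the hosts LSF allocated to the job. blaunch starts each remote task
# under LSF's control, so accounting and cleanup are handled by LSF
# rather than by the job itself (avoiding the two drawbacks above).
for host in $LSB_HOSTS; do
    blaunch "$host" ./worker &   # ./worker is a hypothetical binary
done
wait
```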

> 2) use only bsub's of single processes, using some initial wrapper
> script that bsub's all the jobs (master + N-1 slaves) needed to reach
> the desired static allocation for openmpi. this seems to be what my
> internal guy is suggesting is 'required'.

 Again, this will work, though you may not be too popular with your
 cluster admin if you are holding onto N-1 cpus while waiting for the
 Nth to be allocated. Method (1) would be viewed as a true parallel job
 and could be backfilled, while (2) is just a loose collection of
 sequential tasks. This would also suffer from the same drawbacks
 as (1).
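 A minimal sketch of such a wrapper script (assuming N=4 and
 hypothetical ./master and ./slave binaries):

```shell
# Submit the master and N-1 slaves as independent single-slot jobs.
# LSF sees no relationship between them, so they cannot be backfilled
# as one parallel job, and early-starting jobs hold their slots while
# waiting for the rest of the allocation to appear.
N=4
for i in $(seq 1 $((N - 1))); do
    bsub -n 1 ./slave    # hypothetical slave binary
done
bsub -n 1 ./master       # hypothetical master that rendezvouses with the slaves
```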

 If your application could start with just 1 cpu and then deal with the
 rest as they are added, then you keep the cluster admin happy.

 I guess this discussion is becoming very LSF specific; if you would
 prefer to discuss it offline, please let me know.