Tim and I discussed this on the phone the other day and then I talked
about it more with Ethan. After all this discussion, I committed a
variant of Tim's patch into an HG repository for review before
putting it back on the SVN trunk.
If you have a milliways account, you can get to it via ssh and
therefore be able to push back into it; http clones won't be able to
push. Here are the changes I made:
1. I changed Tim's original concept a little bit: instead of having a
"local" scratch, I called it "fast scratch". The idea is that there
is a "fast" scratch space *that is neither persistent nor global*. It
gets whacked (by default) at the end of the MTT invocation. It has no
protection for multiple MTT invocations using it simultaneously. Any
part of the MTT code base can use this fast scratch if they want to.
If some part wants to use it, it is responsible for making a "safe"
section in the fast scratch tree. The only place currently using it
is the MPI::Install phase (it makes $fast_scratch/mpi-install/ and
does stuff under there); if there's a build portion of that install,
the fast scratch can be used. As Tim set it up, the builddir is set
to point into the fast scratch.
2. New [MTT] section fields:
- delete_fast_scratch: defaults to 1, but can be overridden. If 1,
the entire fast scratch tree will be rm -rf'ed when the current INI
file run is complete.
- save_fast_scratch_files: a comma-delimited list of files to be saved
from the fast scratch tree before it is whacked. It defaults to
"config.log". A Files:: subroutine (currently only invoked by
MPI::Install.pm) scans a designated sub-tree in the fast scratch and
sees if it finds any files matching those names (e.g., config.log).
If it does, it preserves the directory hierarchy it found in the fast
scratch when copying the file to the persistent scratch tree. For
example, when saving config.log from an OMPI build, you'll end up with
the saved files under $scratch/installs/<4CHAR>/fssf (= fast scratch
save files).
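The scan-and-save behavior is roughly the following (a Python sketch
for illustration only -- the real implementation is the Perl Files::
subroutine, and the function and parameter names here are made up):

```python
import os
import shutil

def save_fast_scratch_files(fast_scratch, dest_root, names=("config.log",)):
    """Walk fast_scratch looking for files whose name is in `names`, and
    copy each match into dest_root, recreating the directory hierarchy
    it was found in (like the fssf tree described above)."""
    for dirpath, _subdirs, files in os.walk(fast_scratch):
        for fname in files:
            if fname in names:
                # Preserve the path relative to the fast scratch root
                rel = os.path.relpath(dirpath, fast_scratch)
                target_dir = os.path.join(dest_root, rel)
                os.makedirs(target_dir, exist_ok=True)
                shutil.copy2(os.path.join(dirpath, fname), target_dir)
```

So a config.log found under mpi-install/ompi/ in the fast scratch ends
up under the same mpi-install/ompi/ subpath in the destination tree.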
3. I toyed with the idea of adding an option for saving the *entire*
fast scratch MPI install tree to the permanent scratch (on the
argument that [effectively] tar'ing up the entire fast scratch and
writing it to the regular scratch would still be faster than doing all
the interactive IO to build OMPI on the regular scratch), but I have
run out of time today and therefore probably won't get to it. :-)
This could probably be featurized a bit more, but I figured that this
would be helpful to several of us and would be worth reviewing and
getting into the SVN trunk, even if we lack a few features.
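For concreteness, the two new [MTT] fields might look like this in an
INI file (a sketch; both values shown are just the defaults described
above):

```ini
[MTT]
# rm -rf the fast scratch tree when the INI file run completes (default: 1)
delete_fast_scratch = 1
# comma-delimited list of files to save out of the fast scratch (default)
save_fast_scratch_files = config.log
```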
What do you guys think?
On Sep 19, 2008, at 3:44 PM, Jeff Squyres wrote:
> Excellent points.
> What about a slightly different approach that would allow us to be
> exactly as specific as we want?
> And then in the INI file, have fields that indicate which scratch
> dir you want them to use. For example, the fact that the OMPI MPI
> Install plugin does a build is really a side-effect (not all MPI
> Install plugins do builds). So we could have a field:
> ompi_build_scratch = &scratch(1)
> Or, heck, it doesn't even need to be a function of --scratch at
> all. "ompi_build_scratch = <foo>" alone could be good enough.
> On Sep 19, 2008, at 12:43 PM, Tim Mattox wrote:
>> I've also been thinking about this a bit more, and although
>> having the name match the INI section name has some appeal,
>> I ultimately think the best name is: --mpi-build-scratch, since
>> that is what it does. As Ethan mentioned, the actual MPI
>> install goes into --scratch. And on the other side of it,
>> the MPI Get also goes into --scratch. The --mpi-build-scratch
>> is only used for untarring/copying the MPI source tree, running
>> configure, make, and make check. The actual "make install"
>> simply copies the binaries from --mpi-build-scratch into --scratch.
>> As for names like local-scratch or fast-scratch, they don't convey
>> what it's used for, so should it be fast-for-big-files, or fast-for-
>> Or similarly, "local" to my cluster, my node, or what?
>> I think mpi-build-scratch conveys the most useful meaning, since you
>> should pick a filesystem that is tuned (or at least not horrible) for
>> doing configure/make.
>> Unfortunately, I won't have time today to get the patch adjusted
>> and into svn. Maybe on Monday.
>> On Fri, Sep 19, 2008 at 11:23 AM, Ethan Mallove <ethan.mallove_at_[hidden]> wrote:
>>> On Thu, Sep/18/2008 05:35:13PM, Jeff Squyres wrote:
>>>> On Sep 18, 2008, at 10:45 AM, Ethan Mallove wrote:
>>>>>> Ah, yeah, ok, now I see why you would call it
>>>>>> --mpi-install-scratch, so that it matches the MTT ini section
>>>>>> name. Sure, that works for me.
>>>>> Since this does seem like a feature that should eventually
>>>>> propagate to all the other phases (except for Test run),
>>>>> what will we call the option to group all the fast phase
>>>>> scratches?
>>>> Seriously, *if* we ever implement the other per-phase scratches,
>>>> I think
>>>> having one overall --scratch and fine-grained per-phase
>>>> scratches is fine. I don't think we need to go overboard to have
>>>> a way to say I want phases X, Y, and Z to use scratch A. Meaning
>>>> that you could just use --X-scratch=A --Y-scratch=A and
>>>> --Z-scratch=A.
>>> --mpi-install-scratch actually has MTT install (using
>>> DESTDIR) into --scratch. Is that confusing? Though
>>> --fast-scratch could also be misleading, as I could see a
>>> user thinking that --fast-scratch will do some magical
>>> optimization to make their NFS directory go faster. I guess
>>> I'm done splitting hairs over --mpi-install-scratch :-)
>>>> Jeff Squyres
>>>> Cisco Systems
>>>> mtt-users mailing list
>> Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
>> tmattox_at_[hidden] || timattox_at_[hidden]
>> I'm a bright... http://www.the-brights.net/