From: Ethan Mallove (ethan.mallove_at_[hidden])
Date: 2006-09-26 18:26:40


I've posted the helper script here:

http://svn.open-mpi.org/trac/mtt/wiki/SubmitHelper

Let me know how it works.
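In case it helps to see the idea in miniature, here is a hedged sketch of what such a submit helper boils down to: glob the debug files and HTTP POST each one's contents to the results server. The function name, the raw-body POST, and the text/plain content type are my illustrative assumptions, not the actual SubmitHelper interface.

```python
# Hypothetical sketch of a submit helper: POST each MTT debug file to the
# results server. The pattern, URL handling, and content type are
# illustrative assumptions, not the real SubmitHelper interface.
import glob
import urllib.request

def post_debug_files(pattern, url):
    """POST the contents of every file matching `pattern` to `url`.
    Returns a list of (filename, HTTP status) pairs."""
    results = []
    for path in sorted(glob.glob(pattern)):
        with open(path, "rb") as f:
            data = f.read()
        req = urllib.request.Request(
            url, data=data, headers={"Content-Type": "text/plain"})
        with urllib.request.urlopen(req) as resp:
            results.append((path, resp.status))
    return results
```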

-Ethan

On Tue, Sep/26/2006 04:06:01PM, Jeff Squyres wrote:
> For the moment, that might be sufficient.
>
> What HLRS does is open ssh tunnels back to the head node and then do HTTP
> PUTs through those to the IU database. Icky, but it works.
>
> The problem is that there are some other higher-priority items that we need
> to get done in MTT (performance measurements, for example), so, since there
> are [icky] workarounds for the HTTP puts, we have put the whole "disconnected
> scenario" stuff at a lower priority. :-(
>
>
> On 9/26/06 3:51 PM, "Ethan Mallove" <ethan.mallove_at_[hidden]> wrote:
>
> > I have an unpretty solution that maybe could serve as a
> > stopgap between now and when we implement the "disconnected
> > scenarios" feature. I have a very simple and easy-to-use
> > perl script that just HTTP POSTs a debug file (what *would*
> > have gone to the database). E.g.,
> >
> > $ ./poster.pl -f 'mttdatabase_debug*.txt'
> >
> > (Where mttdatabase_debug would be what you supply to the
> > mttdatabase_debug_filename ini param in the "IU Database"
> > section.)
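> >
> > (For reference, a sketch of the relevant INI bits; the section and
> > parameter names are the ones mentioned in this thread, and the filename
> > value is only illustrative:)

```ini
# "IU Database" reporter section; the filename value is illustrative.
[Reporter: IU Database]
mttdatabase_debug_filename = mttdatabase_debug
```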
> >
> > I think this would fill in your missing * step below.
> >
> > Does that sound okay, Jeff?
> >
> > -Ethan
> >
> >
> > On Tue, Sep/26/2006 03:25:08PM, Josh Hursey wrote:
> >> So the login node is the only one that has a window to the outside
> >> world. I can't access the outside world from within an allocation.
> >>
> >> So my script does:
> >> - Login Node:
> >> 1) Get MPI Tarballs
> >> - 1 Compute node:
> >> 0) Allocate a compute node to compile.
> >> 1) Build/Install MPI builds
> >> 2) Deallocate compute node
> >> - Login Node:
> >> 1) Get MPI Test sources
> >> - N Compute Nodes:
> >> 0) Allocate N compute nodes to run the tests on
> >> 1) Build/Install Tests
> >> 2) Run the tests...
> >> - Login Node:
> >> 0) Check to make sure we are all done (scheduler didn't kill the
> >> job, etc.).
> >> 1) Report results to MTT *
> >>
> >> * This is what I am missing currently.
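> >>
> >> The staged flow above can be outlined as follows. This is a hedged,
> >> hypothetical sketch of the split between login-node and compute-node
> >> work, not how MTT or my driver script actually dispatches the phases;
> >> the stage names and callables are made up for illustration.

```python
# Hypothetical outline of the staged driver described above: network-facing
# steps run on the login node, builds and runs on allocated compute nodes.
# Stage names and the dispatch mechanism are illustrative only.
def run_staged(run_login, run_compute):
    """Execute the stages in order, dispatching each to the right place.
    `run_login` / `run_compute` are callables taking a stage name."""
    log = []
    for where, stage in [
        ("login",   "get MPI tarballs"),
        ("compute", "build/install MPI"),
        ("login",   "get MPI test sources"),
        ("compute", "build tests"),
        ("compute", "run tests"),
        ("login",   "check job completed"),
        ("login",   "report results"),  # the currently missing step
    ]:
        (run_login if where == "login" else run_compute)(stage)
        log.append((where, stage))
    return log
```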
> >>
> >> I currently have the "Reporter: IU Database" section commented out so
> >> that once the tests finish they don't try to post to the database, since
> >> they can't see the outside world.
> >>
> >> On Sep 26, 2006, at 3:17 PM, Ethan Mallove wrote:
> >>
> >>> On Tue, Sep/26/2006 02:01:41PM, Josh Hursey wrote:
> >>> I'm setting up MTT on BigRed at IU, and due to the limited network
> >>> visibility of the compute nodes I have segmented the MTT operations.
> >>>> Currently I have a perl script that does all the svn and wget
> >>>> interactions from the login node, then compiles and runs on the
> >>>> compute nodes. This all seems to work fine.
> >>>>
> >>>> Now I am wondering how to get the textfile results that were
> >>>> generated back to the MTT database once the run has finished.
> >>>>
> >>>
> >>> If you run the "MPI Install", "Test build", and "Test run"
> >>> sections from the same machine (call it the
> >>> "Install-Build-Run" node), I would think you could then
> >>> additionally run the "Reporter: IU Database" section. Or can
> >>> you not do the HTTP POST from Install-Build-Run node?
> >>>
> >>> -Ethan
> >>>
> >>>> I know HLRS deals with this situation, is there a supported way of
> >>>> doing this yet or is it a future work item still?
> >>>>
> >>>> Currently I have a method to send a summary email to our team after
> >>>> the results are generated, so this isn't a show stopper for IU or
> >>>> anything, just something so we can share our results with the rest of
> >>>> the team.
> >>>>
> >>>> Cheers,
> >>>> Josh
> >>>> _______________________________________________
> >>>> mtt-users mailing list
> >>>> mtt-users_at_[hidden]
> >>>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
> >>
>
>
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems