From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2006-07-12 12:12:51


> -----Original Message-----
> From: mtt-users-bounces_at_[hidden]
> [mailto:mtt-users-bounces_at_[hidden]] On Behalf Of Ethan Mallove
> Sent: Wednesday, July 12, 2006 10:38 AM
> To: mtt-users_at_[hidden]
> Subject: Re: [MTT users] [Fwd: [perfbase-users] Submitting a run in
> multiple steps]
>
> Jeff Squyres (jsquyres) wrote on 07/10/06 17:48:
> > I think the latter is what we'll likely do; our goal is to see
> > partial results (e.g., not have to wait 10 hours to see that every
> > single test failed) rather than be able to submit lots of results
> > at once.
>
> To see partial results, I have this in my .ini file:
>
> ---
> [Reporter: IU database]
> module = Perfbase
>
> perfbase_realm = OMPI
> perfbase_username = postgres
> ... [snip] ...
> perfbase_debug_filename = pb_debug
> ---
>
> I can then look at the pb_debug* files to see the results as they're
> happening. They're not pretty results in tabular or graphical format,
> but could these raw results suffice for most situations?

They could suffice for some, I suppose. Note that I implemented the
debug_filename option as an "either dump to a file *or* submit back to
perfbase" kind of thing, so if you set debug_filename, it won't submit
back to perfbase. That's easy enough to change, of course (i.e., there's
no technical reason preventing both from happening), but that's the way
it is right now.

But also keep in mind that this only dumps output at the same time that
we would have reported back to perfbase, so the timing of the results
will be about the same.

The code currently reports back to perfbase after each test execution
(as opposed to after the entire suite), so our granularity for "real
time results" is quite good right now. I suspect that we'll want to add
a batching mechanism before going to production (e.g., collect N
results at a time and submit them all at once so that we don't hammer
the IU server). That should also not be difficult to do.

> E.g., you can get a pretty good idea of how the tests are going by
> just doing:
>
> $ grep -E 'test_pass|test_name' pb_debug*
> pb_debug.0.txt:test_name: c_hello
> pb_debug.0.txt:test_pass: 1
> pb_debug.1.txt:test_name: cxx_hello
> pb_debug.1.txt:test_pass: 1
> pb_debug.10.txt:test_name: MPI_Barrier_c
> pb_debug.10.txt:test_pass: 1
> pb_debug.11.txt:test_name: MPI_Bcast_c
> pb_debug.11.txt:test_pass: 1
> ... [snip] ...
>
> I think, though, that the Perfbase.pm code needs to be adjusted to
> allow results to go both to the perfbase debug files _and_ to IU's
> perfbase simultaneously (right now there's an if-else preventing
> this).

Yep -- you hit the nail on the head. No reason it can't do both; I just
coded it up that way, well, *because*. :-)
 

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems