
MTT Devel Mailing List Archives


From: Josh Hursey (jjhursey_at_[hidden])
Date: 2006-09-28 14:50:43


Finally getting a chance to try this out.

I am trying to use the script as prescribed on the webpage and am
getting some errors, apparently from the 'eval $buf;' on line 39:
---------------------------------------
mpiteam_at_BigRed:> ./local/bin/post-mtt-results.pl -d -f bigred-Test_Build-trivial-ompi-nightly-trunk-1.3a1r11860.txt
Number found where operator expected at (eval 8) line 3, near "Linux
2.6.5"
         (Do you need to predeclare Linux?)
Number found where operator expected at (eval 8) line 6, near "28 14"
         (Missing operator before 14?)
Number found where operator expected at (eval 8) line 6, near "30 2006"
         (Missing operator before 2006?)
Bareword found where operator expected at (eval 8) line 7, near "2006
submit_test_timestamp"
         (Missing operator before submit_test_timestamp?)
Number found where operator expected at (eval 8) line 7, near "28 14"
         (Missing operator before 14?)
Number found where operator expected at (eval 8) line 7, near "40 2006"
         (Missing operator before 2006?)
Bareword found where operator expected at (eval 8) line 8, near "2006
submitting_local_username"
         (Missing operator before submitting_local_username?)
Bareword found where operator expected at (eval 8) line 11, near "3.3.3
mpi_get_section_name"
         (Missing operator before mpi_get_section_name?)
Bareword found where operator expected at (eval 8) line 14, near
"1.3a1r11860"
         (Missing operator before a1r11860?)
Bareword found where operator expected at (eval 8) line 16, near "6
perfbase_xml"
         (Missing operator before perfbase_xml?)
Number found where operator expected at (eval 8) line 19, near "28 14"
         (Missing operator before 14?)
Number found where operator expected at (eval 8) line 19, near "37 2006"
         (Missing operator before 2006?)
Bareword found where operator expected at (eval 8) line 20, near "2006
success"
         (Missing operator before success?)
Bareword found where operator expected at (eval 8) line 21, near "1
test_build_section_name"
         (Missing operator before test_build_section_name?)
Bareword found where operator expected at (eval 8) line 22, near "3
seconds"
         (Missing operator before seconds?)
posting the following:
hostname: s9c4b2
os_name: Linux
os_version: Linux 2.6.5-7.276-pseries64
platform_hardware: ppc64
platform_type: linux-sles9-ppc64
start_run_timestamp: Thu Sep 28 14:56:30 2006
submit_test_timestamp: Thu Sep 28 14:56:40 2006
submitting_local_username: mpiteam
compiler_name: gnu
compiler_version: 3.3.3
mpi_get_section_name: ompi-nightly-trunk
mpi_install_section_name: bigred gcc warnings
mpi_name: ompi-nightly-trunk
mpi_version: 1.3a1r11860
mtt_version_minor: 6
perfbase_xml: inp_test_build.xml
phase: Test Build
result_message: Success
start_test_timestamp: Thu Sep 28 14:56:37 2006
success: 1
test_build_section_name: trivial
test_duration_interval: 3 seconds
to http://www.open-mpi.org/mtt/submit/index.php
Need a field name at (eval 10) line 1
---------------------------------------
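The warnings above are consistent with what happens when "key: value" text is eval'd as code: a line like "os_version: Linux 2.6.5-7.276-pseries64" is not a valid expression, hence "Number found where operator expected". A minimal sketch of parsing such a debug file without eval (Python here purely for illustration; the actual helper is Perl, and the file format is inferred from the dump above, not from the real script):

```python
def parse_debug_file(text):
    """Parse MTT debug output of the form 'key: value' into a dict.

    Splitting only on the first ': ' keeps values that themselves
    contain colons or spaces (timestamps, version strings) intact.
    """
    fields = {}
    for line in text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            fields[key.strip()] = value.strip()
    return fields


# A few sample lines taken from the dump above.
sample = """hostname: s9c4b2
os_name: Linux
os_version: Linux 2.6.5-7.276-pseries64
start_run_timestamp: Thu Sep 28 14:56:30 2006
success: 1"""

print(parse_debug_file(sample)["os_version"])  # Linux 2.6.5-7.276-pseries64
```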

On Sep 26, 2006, at 6:26 PM, Ethan Mallove wrote:

> I've posted the helper script here:
>
> http://svn.open-mpi.org/trac/mtt/wiki/SubmitHelper
>
> Let me know how it works.
>
> -Ethan
>
>
> On Tue, Sep/26/2006 04:06:01PM, Jeff Squyres wrote:
>> For the moment, that might be sufficient.
>>
>> What HLRS does is open ssh tunnels back to the head node and then
>> HTTP PUTs through those back to the IU database. Icky, but it works.
>>
>> The problem is that there are some other higher-priority items that
>> we need to get done in MTT (performance measurements, for example).
>> Since there are [icky] workarounds for HTTP PUTs, we put the whole
>> "disconnected scenario" stuff at a lower priority. :-(
>>
>>
>> On 9/26/06 3:51 PM, "Ethan Mallove" <ethan.mallove_at_[hidden]> wrote:
>>
>>> I have an unpretty solution that maybe could serve as a
>>> stopgap between now and when we implement the "disconnected
>>> scenarios" feature. I have a very simple and easy-to-use
>>> perl script that just HTTP POSTs a debug file (what *would*
>>> have gone to the database). E.g.,
>>>
>>> $ ./poster.pl -f 'mttdatabase_debug*.txt'
>>>
>>> (Where mttdatabase_debug would be what you supply to the
>>> mttdatabase_debug_filename ini param in the "IU Database"
>>> section.)
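The POST step this describes can be sketched roughly as below (Python for illustration; poster.pl is Perl, and the exact field names and handling are assumptions based on the debug dump earlier in the thread, not the real script's logic):

```python
import urllib.parse

# Hypothetical subset of the fields a debug file would carry; the real
# file contains many more (see the "posting the following" dump above).
fields = {
    "hostname": "s9c4b2",
    "phase": "Test Build",
    "success": "1",
}

# Encode the fields as an application/x-www-form-urlencoded body, the
# shape an HTTP POST of form data would send.
body = urllib.parse.urlencode(fields)
print(body)

# urllib.request.urlopen("http://www.open-mpi.org/mtt/submit/index.php",
#                        data=body.encode()) would then perform the POST.
```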
>>>
>>> I think this would fill in your missing * step below.
>>>
>>> Does that sound okay, Jeff?
>>>
>>> -Ethan
>>>
>>>
>>> On Tue, Sep/26/2006 03:25:08PM, Josh Hursey wrote:
>>>> So the login node is the only one that has a window to the outside
>>>> world. I can't access the outside world from within an allocation.
>>>>
>>>> So my script does:
>>>> - Login Node:
>>>> 1) Get MPI Tarballs
>>>> - 1 Compute node:
>>>> 0) Allocate a compute node to compile.
>>>> 1) Build/Install MPI builds
>>>> 2) Deallocate compute node
>>>> - Login Node:
>>>> 1) Get MPI Test sources
>>>> - N Compute Nodes:
>>>> 0) Allocate N compute Nodes to run the tests on
>>>> 1) Build/Install Tests
>>>> 2) Run the tests...
>>>> - Login Node:
>>>> 0) Check to make sure we are all done (scheduler didn't kill the
>>>> job, etc.).
>>>> 1) Report results to MTT *
>>>>
>>>> * This is what I am missing currently.
>>>>
>>>> I currently have the "Reporter: IU Database" section commented out
>>>> so that once the tests finish they don't try to post to the
>>>> database, since they can't see the outside world.
>>>>
>>>> On Sep 26, 2006, at 3:17 PM, Ethan Mallove wrote:
>>>>
>>>>> On Tue, Sep/26/2006 02:01:41PM, Josh Hursey wrote:
>>>>>> I'm setting up MTT on BigRed at IU, and due to some visibility
>>>>>> requirements of the compute nodes I segment the MTT operations.
>>>>>> Currently I have a perl script that does all the svn and wget
>>>>>> interactions from the login node, then compiles and runs on the
>>>>>> compute nodes. This all seems to work fine.
>>>>>>
>>>>>> Now I am wondering how to get the textfile results that were
>>>>>> generated back to the MTT database once the run has finished.
>>>>>>
>>>>>
>>>>> If you run the "MPI Install", "Test build", and "Test run"
>>>>> sections from the same machine (call it the
>>>>> "Install-Build-Run" node), I would think you could then
>>>>> additionally run the "Reporter: IU Database" section. Or can
>>>>> you not do the HTTP POST from Install-Build-Run node?
>>>>>
>>>>> -Ethan
>>>>>
>>>>>> I know HLRS deals with this situation; is there a supported way
>>>>>> of doing this yet, or is it still a future work item?
>>>>>>
>>>>>> Currently I have a method to send a summary email to our team
>>>>>> after
>>>>>> the results are generated, so this isn't a show stopper for IU or
>>>>>> anything, just something so we can share our results with the
>>>>>> rest of
>>>>>> the team.
>>>>>>
>>>>>> Cheers,
>>>>>> Josh
>>>>>> _______________________________________________
>>>>>> mtt-users mailing list
>>>>>> mtt-users_at_[hidden]
>>>>>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
>>>>
>>>> _______________________________________________
>>>> mtt-users mailing list
>>>> mtt-users_at_[hidden]
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
>>
>>
>> --
>> Jeff Squyres
>> Server Virtualization Business Unit
>> Cisco Systems

----
Josh Hursey
jjhursey_at_[hidden]
http://www.open-mpi.org/