
MTT Devel Mailing List Archives


Subject: Re: [MTT devel] MTToGDS
From: Igor Ivanov (igor.ivanov_at_[hidden])
Date: 2010-02-10 04:12:24


Hi Jeff,

Some of these points were touched on by Mike in another mail.
You can find my answers below, marked as [II].

Regards,
Igor

Jeff Squyres wrote:
On Feb 5, 2010, at 4:56 AM, Igor Ivanov wrote:

  
Thank you for starting to play with it; I hope you find it useful.
I will try to answer the questions you raised.
    

Thanks!  Sorry for the delay in my answering -- got caught up in other stuff...  Ugh!

  
1. Yes, you are correct. The implementation uses google account authorization only for access to the web page; client applications use a separate approach to communicate with the datastore.
From my point of view it is difficult to say which way is better. Either way we need to manage a list of valid accounts to answer "is this username/password combo valid?" (Google does not do this task for us) and to send the username/password information from the client to the application. A visible preference could exist for web usage, but that was not the main goal.
    

Gotcha.

FWIW, I think it would be (slightly) easier if we don't have to manage users' passwords on the appspot.  If the MTT client can just submit using a regular google account username+password, that would be one less thing to have to manage.  I guess I'm a little burned out from our current MTT setup where people had to bug me to reset their passwords (in a local .htaccess file) whenever they lost/forgot them.  :-)

All things being equal, you're right, of course -- a) we still have to maintain a list of google accounts who are allowed to submit/access/whatever, b) we still have to ship off a username/password combo and ask if it's valid.  But eliminating that password column from our data, IMHO, represents pushing off all account management to Google.  

Is it hard to redirect the appspot lookup to use google account names + passwords?
  
[II] I believe it is a possible task. In either case, set the google account e-mail in the mttdatabase_username key of the ini-file. Then it could be done in one of two ways:
1) fill User.username with the google account e-mail and change the code of User.check_password in gds/auth/models.py to use google account verification;
code example (I have not checked it):

        import urllib
        from google.appengine.api import urlfetch

        # Ask Google's ClientLogin service whether this
        # username/password combination is valid.
        request_body = urllib.urlencode({'Email': username,
                                         'Passwd': raw_password,
                                         'accountType': 'HOSTED_OR_GOOGLE',
                                         'service': 'ah',
                                         'source': 'test'})
        auth_response = urlfetch.fetch('https://www.google.com/accounts/ClientLogin',
                                       method=urlfetch.POST,
                                       headers={'Content-Type': 'application/x-www-form-urlencoded',
                                                'Content-Length': str(len(request_body))},
                                       payload=request_body)
        # HTTP 200 means the credentials were accepted.
        return auth_response.status_code == 200
2) fill User.email with the google account e-mail, modify the authenticate code in auth/__init__.py to query by User.email, and implement the google account verification there.
Keep in mind the performance difference between google account verification and local verification!
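On the client side, the change would amount to putting the google account credentials into the reporter keys of the ini-file. A hypothetical fragment follows -- only mttdatabase_username is named above; the section name and the other keys are assumptions, not verified against the GDS reporter:

```ini
# Hypothetical sketch: only mttdatabase_username is from the
# discussion above; the section name and other keys are assumed.
[Reporter: GDS]
mttdatabase_url      = http://<appid>.appspot.com
mttdatabase_username = someone@gmail.com
mttdatabase_password = <google account password>
```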
  
2. The current implementation of the datastore environment is oriented mostly toward client usage and does not give users rich web possibilities. The existing web form should be considered an administrator's instrument for now.
    

Gotcha.  Someday someone with lots of time can write a glitzy web 2.0 interface.  ;-)

  
There is a special command line utility, bquery.pl, located at <mtt root>/src/client, that allows communicating with the datastore. It can query data from the datastore and display different information on the console, using an extended (closer to SQL) gql syntax implemented for users' convenience. More detailed information can be found in the document at http://svn.open-mpi.org/svn/mtt/trunk/docs/gds/VBench_bquery.doc

For example, to get information related to an mpi install, the following command line can be used:

$ ./bquery.pl --username=<username> --password=<password> --server=http://<appid>.appspot.com
--view --gqls="select description, mpi_path from MpiInstallPhase where duration=1" --format=txt

description                          mpi_path
----------------------------------   ----------------
Voltaire already installed MPI+OMA   /opt/openmpi/1.3
...
    

Nifty -- I'll go play with this...

  
3. If we can collect all needed information about a cluster in a transparent way, we should do it. ClusterInfo.pm is an attempt to gather that info in one place in a clear manner.
    

I ask because many of the assumptions in ClusterInfo.pm are not valid for my cluster.

  
4. You are right, it can be done.
    

If you don't care, and since I'm the one making such an annoying request, I'll be happy to do the work for this one.  :-)

  
5. Results are cached to keep the link information between the "test build" -> "mpi install" and "test run" -> "test build" -> "mpi install" phases.
    

Ah -- I see.  In the SQL submitter, when we submit each phase, we get an ID back to use for the next linked phase (e.g., the mpi install submit returns an ID that is used with a corresponding test build submit, etc.).  Is that not possible here?  I.e., can a submit return an ID to be used with the next submit?

I ask for two reasons:

1. When running a huge number of tests in MTT (like I do), it is useful to see the results start appearing in the database gradually over time rather than having to wait (potentially) many hours for all the results to appear at once.

2. I actually run OMPI testing in two phases at Cisco:

   a. (mpi get + mpi install + test get + test build) for ~25 different mpi install sections
   b. as each one of those finish, launch test run phases for each, with either ~10 or ~25 mpi details variants (depending on the specific mpi install)

   Specifically, I execute each of my test_run phases separately from all the other phases (because I have lots of them running in parallel for a given mpi install).  Hence, the test run phase needs to be able to run long after all the other phase results were submitted.

I believe IU and Sun do similar things (although our MTT setups are quite different from each other, I think we have all separated the get/install/get/build stuff from test runs).

  
6. Could you send detailed info about the issue (the ini-file, mtt.log with verbose output, and the command line)? We will look at it.
    

Let me reproduce and simplify; I was using a fairly complex ini file...  

Thanks!

  

