
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Open MPI task scheduler
From: jody (jody.xha_at_[hidden])
Date: 2010-06-21 02:41:23


Hi

I think your problem can be solved easily at the MPI level.
Just have your manager execute a loop in which it waits for any message.
Define different message types by their MPI tags. Once a message
has been received, decide what to do by looking at the tag.

Here I assume that a worker with no job sends a message with the tag
TAG_TASK_REQUEST and then waits to receive a message from the master
with either a new task or the command to exit.
Once a worker has finished a task, it sends a message with the tag TAG_RESULT
and then sends a message containing the result.
I also assume that new tasks can be sent from a different node by using
the tag TAG_NEW_TASK.

The main loop in the Master would be:

while (more_tasks) {
    MPI_Recv(&a, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
    switch (st.MPI_TAG) {
      case TAG_TASK_REQUEST:
        sendNextTask(st.MPI_SOURCE);
        break;
      case TAG_RESULT:
        collectResult(st.MPI_SOURCE);
        break;
      case TAG_NEW_TASK:
        putNewTaskOnQueue(st.MPI_SOURCE);
        break;
    }
}

In a worker:

  while (go_on) {
     MPI_Send(&a, 1, MPI_INT, idMaster, TAG_TASK_REQUEST, MPI_COMM_WORLD);
     MPI_Recv(&TaskDef, 1, TaskType, idMaster, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
     if (st.MPI_TAG == TAG_STOP) {
       go_on = false;
     } else {
       result = workOnTask(TaskDef, TaskLen);
       MPI_Send(&a, 1, MPI_INT, idMaster, TAG_RESULT, MPI_COMM_WORLD);
       MPI_Send(result, 1, resultType, idMaster, TAG_RESULT_CONTENT, MPI_COMM_WORLD);
     }
  }

I hope this helps
  Jody

On Mon, Jun 21, 2010 at 12:17 AM, Jack Bryan <dtustudy68_at_[hidden]> wrote:
> Hi,
> thank you very much for your help.
> What is the meaning of "must find a system so that every task can be
> serialized in the same form"? What is the meaning of "serialize"?
> I have no experience of programming with python and XML.
> I have studied your blog.
> Where can I find a simple example to use the techniques you have said ?
> For example, I have 5 tasks (print "hello world!").
> I want to use 6 processors to do it in parallel.
> One processor is the manager node who distributes tasks and the other 5
> processors do the printing jobs, and when they are done, they tell this
> to the manager node.
>
> Boost.Asio is a cross-platform C++ library for network and low-level I/O
> programming. I have no experiences of using it. Will it take a long time to
> learn
> how to use it ?
> If the messages are transferred by SOAP+TCP, how the manager node calls it
> and push task into it ?
> Do I need to install SOAP+TCP on my cluster so that I can use it ?
>
> Any help is appreciated.
> Jack
> June 20  2010
>> Date: Sun, 20 Jun 2010 21:00:06 +0200
>> From: matthieu.brucher_at_[hidden]
>> To: users_at_[hidden]
>> Subject: Re: [OMPI users] Open MPI task scheduler
>>
>> 2010/6/20 Jack Bryan <dtustudy68_at_[hidden]>:
>> > Hi, Matthieu:
>> > Thanks for your help.
>> > Most of your ideas show that what I want to do.
>> > My scheduler should be able to be called from any C++ program, which can
>> > put
>> > a list of tasks to the scheduler and then the scheduler distributes the
>> > tasks to other client nodes.
>> > It may work like in this way:
>> > while(still tasks available) {
>> > myScheduler.push(tasks);
>> > myScheduler.get(tasks results from client nodes);
>> > }
>>
>> Exactly. In your case, you want only one server, so you must find a
>> system so that every task can be serialized in the same form. The
>> easiest way to do so is to serialize your parameter set as an XML
>> fragment and add the type of task as another field.
>>
>> > My cluster has 400 nodes with Open MPI. The tasks should be transferred
>> > by the MPI protocol.
>>
>> No, they should not ;) MPI can be used, but it is not the easiest way
>> to do so. You still have to serialize your ticket, and you have to use
>> some functions that are from MPI2 (so perhaps not as portable as MPI1
>> functions). Besides, it cannot be used from programs that do not speak
>> MPI.
>>
>> > I am not familiar with  RPC Protocol.
>>
>> RPC is not a protocol per se. SOAP is. RPC stands for Remote Procedure
>> Call. It is basically your scheduler that has several functions
>> clients can call:
>> - add tickets
>> - retrieve ticket
>> - ticket is done
>>
>> > If I use Boost.ASIO and some Python/GCCXML script to generate the code,
>> > it
>> > can be
>> > called from C++ program on Open MPI cluster ?
>>
>> Yes, SOAP is just an XML way of representing the fact that you call a
>> function on the server. You can use it with C++, Java, ... I use it
>> with Python to monitor how many tasks are remaining, for instance.
>>
>> > I cannot find the skeleton on your blog.
>> > Would you please tell me where to find it ?
>>
>> It's not complete as some of the work is property of my employer. This
>> is how I use GCCXML to generate the calling code:
>>
>> http://matt.eifelle.com/2009/07/21/using-gccxml-to-automate-c-wrappers-creation/
>> You have some additional code to write, but this is the main idea.
>>
>> > I really appreciate your help.
>>
>> No sweat, I hope I can give you correct hints!
>>
>> Matthieu
>> --
>> Information System Engineer, Ph.D.
>> Blog: http://matt.eifelle.com
>> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>>
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>