Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] dynamic spawn process on remote node
From: Vasiliy G Tolstov (v.tolstov_at_[hidden])
Date: 2010-10-22 10:06:19


On Fri, 2010-10-22 at 16:04 +0200, Reuti wrote:
> On 22.10.2010 at 14:09, Vasiliy G Tolstov wrote:
>
> > On Fri, 2010-10-22 at 14:07 +0200, Reuti wrote:
> >> Hi,
> >>
> >> On 22.10.2010 at 10:58, Vasiliy G Tolstov wrote:
> >>
> >>> Hello. Maybe this question has already been answered, but I can't find it
> >>> in the list archive.
> >>>
> >>> I'm running about 60 Xen nodes with about 7-20 virtual machines on each.
> >>> I want to gather disk, CPU, memory, and network utilisation from the
> >>> virtual machines and store it in a database for later processing.
> >>>
> >>> As I see it, my architecture looks like this: one or two master servers
> >>> run an MPI process with rank 0 that can insert data into the database.
> >>> These master servers spawn an MPI process on each Xen node that gathers
> >>> statistics from the virtual machines on that node and sends them to the
> >>> masters (maybe via a multicast request). On each virtual machine I have
> >>> an MPI process that can collect data and send it to the MPI process on
> >>> its Xen node. Virtual machines can migrate to other Xen nodes....
> >>
> >> do you just want to monitor the physical and virtual machines with an application running under MPI? It sounds like that could be done with Ganglia or Nagios.
> >
> > No. I want to get real-time data to decide which virtual machine I need
> > to migrate to another Xen node, because it needs more resources.
>
> This is indeed an interesting field; it has come up a couple of times on the SGE Gridengine mailing list as well: how to handle jobs whose resource requests vary over their lifetime, and how they should signal to the queuing system (or state it up front in the `qsub` command) that they now have to move to a bigger node (or could be moved to a smaller node with fewer resources).
>
> -- Reuti

Very interesting. Thank you for the suggestion.
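
For the spawning part of the architecture above, here is a rough sketch of how
the rank-0 master could start a collector process on a specific Xen node with
MPI_Comm_spawn. The hostname "xen-node-01", the binary name "node_collector"
and the flat stats array are just placeholders, and it assumes the target node
is already known to the Open MPI runtime (e.g. listed in a hostfile):

/* master.c -- rough sketch only, placeholder names throughout */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ask Open MPI to place the spawned process on a particular node. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "xen-node-01");   /* placeholder hostname */

    MPI_Comm children;   /* intercommunicator to the spawned collector */
    MPI_Comm_spawn("./node_collector", MPI_ARGV_NULL, 1, info,
                   0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);

    if (rank == 0) {
        /* Receive cpu/mem/disk/net utilisation from the collector. */
        double stats[4];
        MPI_Recv(stats, 4, MPI_DOUBLE, 0, 0, children, MPI_STATUS_IGNORE);
        printf("cpu=%.2f mem=%.2f disk=%.2f net=%.2f\n",
               stats[0], stats[1], stats[2], stats[3]);
        /* ... insert into the database here ... */
    }

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}

The spawned node_collector would call MPI_Comm_get_parent() to obtain the same
intercommunicator and MPI_Send() the gathered figures back to rank 0 of the
parent.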

-- 
Vasiliy G Tolstov <v.tolstov_at_[hidden]>
Selfip.Ru