Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] dynamic spawn process on remote node
From: Reuti (reuti_at_[hidden])
Date: 2010-10-22 08:07:19


Hi,

On 22.10.2010 at 10:58, Vasiliy G Tolstov wrote:

> Hello. Maybe this question has already been answered, but I couldn't
> find it in the list archive.
>
> I'm running about 60 Xen nodes with about 7-20 virtual machines each.
> I want to gather disk, CPU, memory, and network utilisation from the
> virtual machines and put it into a database for later processing.
>
> As I see it, my architecture looks like this: one or two master
> servers run an MPI process with rank 0 that can insert data into the
> database. These master servers spawn an MPI process on each Xen node,
> which gathers statistics from the virtual machines on that node and
> sends them to the masters (maybe with a multicast request). On each
> virtual machine I have an MPI process that can send data to the MPI
> process on its Xen node. Virtual machines have the ability to migrate
> to other Xen nodes....

Do you just want to monitor the physical and virtual machines with an application running under MPI? It sounds like it could be done with Ganglia or Nagios then.

-- Reuti

> Please, can you help me with the architecture of this system (are my
> thoughts right)?
> And one more question - what is the best way to attach an MPI process
> to an already running group? (For example, when a virtual machine is
> rebooted, or maybe a Xen node is rebooted)....
>
> Thank you for any answers...
>
> --
> Vasiliy G Tolstov <v.tolstov_at_[hidden]>
> Selfip.Ru
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users