PS: I am totally new to MPI internals, so if we decide to go ahead with
the project, I will be a regular bugger on the list.
----- Original Message ----
From: Adrian Knoth <firstname.lastname@example.org>
To: Open MPI Developers <email@example.com>
Sent: Thursday, January 10, 2008 1:24:01 AM
Subject: Re: [OMPI devel] btl tcp port to xensocket
On Tue, Jan 08, 2008 at 10:51:45PM -0800, Muhammad Atif wrote:
> I am planning to port the tcp component to xensocket, which is a fast
> interdomain communication mechanism for guest domains in Xen. I may
Just to get things right: You first partition your SMP/Multicore system
with Xen, and then want to re-combine it later for MPI communication?
Wouldn't it be easier to leave the unpartitioned host alone and use
shared memory communication instead?
> As per the design, and the fact that these sockets are not normal,
> I have to pass certain information (basically memory references,
> domain info, etc.) to other peers once sockets have been created. I
There's ORTE, the runtime environment. It employs OOB/tcp to provide a
so-called out-of-band channel. ORTE also provides a general purpose
registry, the GPR.
Once a TCP connection between the headnode process and all other peers
is established, you can store your required information in the GPR.
> understand that mca_pml_base_modex_send and recv (or simply using
> mca_btl_tcp_component_exchange) can be used to exchange information,
Use mca_pml_base_modex_send (now ompi_modex_send) and encode your
required information; it gets stored in the GPR. Read it back with
mca_pml_base_modex_recv (ompi_modex_recv), as it is done in
mca_btl_tcp_component_exchange and mca_btl_tcp_proc_create.
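To make that concrete, here is a rough, untested sketch of the publish
step. The mca_btl_xen_* names and the payload layout are placeholders I
made up for a hypothetical xensocket BTL; only ompi_modex_send() itself
(declared in ompi/runtime/ompi_module_exchange.h, if I'm not mistaken)
is the real API:

  /* Placeholder payload a xensocket BTL might advertise to its peers;
   * btl/tcp publishes an array of its own address structs in the same spot. */
  struct mca_btl_xen_addr_t {
      uint32_t addr_domid;   /* Xen domain id of this guest           */
      uint32_t addr_memref;  /* memory/grant reference the peer needs */
  };

  static int mca_btl_xen_component_exchange(void)
  {
      struct mca_btl_xen_addr_t addr;

      addr.addr_domid  = mca_btl_xen_component.xen_domid;   /* made-up fields */
      addr.addr_memref = mca_btl_xen_component.xen_memref;

      /* Store the blob in the GPR, keyed by this component's version. */
      return ompi_modex_send(&mca_btl_xen_component.super.btl_version,
                             &addr, sizeof(addr));
  }

The runtime distributes that blob to all processes during MPI_Init over
the OOB channel, so every peer can read it before the first BTL
connection is attempted.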
> but I cannot seem to get them to communicate. So to put my question
> in a very simple way: I want to create a socket structure containing
> necessary information, and then pass it to all other peers before
> the start of actual MPI communication. What is the easiest way to do it?
Quite the same way: mca_btl_tcp_component_exchange assembles the
required information and stores it in the GPR by calling
ompi_modex_send; mca_btl_tcp_proc_create (think of "the other peers")
then reads this information back into the local context with
ompi_modex_recv.
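The matching lookup on the receiving side could look roughly like the
following; again only a sketch that mimics the shape of
mca_btl_tcp_proc_create, with the same made-up xen names as above:

  static int mca_btl_xen_proc_lookup(struct ompi_proc_t *ompi_proc,
                                     struct mca_btl_xen_addr_t **addr_out)
  {
      void  *blob = NULL;
      size_t size = 0;
      int    rc;

      /* Read back what this particular peer published with ompi_modex_send() */
      rc = ompi_modex_recv(&mca_btl_xen_component.super.btl_version,
                           ompi_proc, &blob, &size);
      if (OMPI_SUCCESS != rc || sizeof(struct mca_btl_xen_addr_t) != size) {
          return OMPI_ERR_UNREACH;
      }
      *addr_out = (struct mca_btl_xen_addr_t*) blob;  /* caller keeps/frees it */
      return OMPI_SUCCESS;
  }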
I guess you might want to copy btl/tcp to, let's say, btl/xen, so you can
modify internal structures, if required. Perhaps xensockets don't need
IP addresses, as they are actually memory sockets.
However, you'll still need TCP communication between Xen guests for the
ORTE out-of-band channel.
As mentioned above, I'm not sure whether it's reasonable to combine Xen
and MPI at all. Virtualization overhead might decrease your performance,
and overhead is usually the last thing you want when using MPI ;)
Cluster and Metacomputing Working Group
Friedrich-Schiller-Universität Jena, Germany
devel mailing list
firstname.lastname@example.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel