
Open MPI User's Mailing List Archives


From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2006-04-20 19:35:27


> -----Original Message-----
> From: users-bounces_at_[hidden]
> [mailto:users-bounces_at_[hidden]] On Behalf Of Bogdan Costescu
> Sent: Thursday, April 20, 2006 10:32 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Open-MPI and TCP port range
>
> On Thu, 20 Apr 2006, Jeff Squyres (jsquyres) wrote:
>
> > Right now, there is no way to restrict the port range that Open MPI
> > will use. ... If this becomes a problem for you (i.e., the random
> > MPI-chose-the-same-port-as-your-app events happen a lot), let us
> > know and we can probably put in some controls to work around this.
>
> I would welcome a discussion about this; on the LAM/MPI lists several
> people asked for a limited port range to allow them to pass through
> firewalls or to do tunneling.

Recall that we didn't end up doing this in LAM because limiting the port
range is not by itself sufficient to let you run parallel jobs across
firewalls. The easiest solution is to have a single routing entity that
can be exposed publicly (in front of the firewall, either virtually or
physically) and that understands MPI -- so that MPI processes outside
the firewall can send to this entity and it routes the messages to the
appropriate back-end MPI process. This routable entity does not exist
for LAM (*), and does not yet exist for Open MPI (there have been
discussions about creating it, but nothing has been done yet).

(*) Disclaimer: the run-time environment for LAM actually does support
this kind of routing, but we stopped actively maintaining it years ago
-- it may or may not work at the MPI layer.
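To make the idea concrete, here is a toy sketch (in Python, and
emphatically not Open MPI code) of such a single routable entity: one
public-facing relay accepts every external connection, reads a one-byte
rank header, and pipes bytes to the matching back-end process behind the
firewall. The rank-to-address table, ports, and header format are all
made up for the illustration.

```python
import socket
import threading

# Hypothetical mapping from MPI rank to a back-end (host, port) behind
# the firewall -- invented for this sketch.
BACKENDS = {0: ("127.0.0.1", 29411), 1: ("127.0.0.1", 29412)}

def pipe(src, dst):
    """Copy bytes one direction until EOF, then half-close the destination."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer already gone; nothing more to relay

def handle(conn):
    """Read a 1-byte rank header, then relay bytes in both directions."""
    hdr = conn.recv(1)
    if not hdr:
        conn.close()
        return
    backend = socket.create_connection(BACKENDS[hdr[0]])
    threading.Thread(target=pipe, args=(backend, conn), daemon=True).start()
    pipe(conn, backend)

def serve(port):
    """Accept connections on the single public-facing port."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

With something like this, only the relay's one address and port need to
be reachable from outside; a real version would also have to speak the
MPI implementation's wire protocol, which is exactly the part that does
not exist.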

Two other scenarios are also possible:

1. Make a virtual public IP address in front of the firewall for each
back-end node. MPI processes that send data to a public IP address
are routed [by the firewall] to the corresponding back-end node.

2. Use a single virtual public IP address in front of the firewall with
N ports open. MPI processes that send data to the public IP address
are dispatched [by the firewall] to the appropriate back-end node based
on the port number.

Both of these require opening a bunch of holes in the firewall, which is
at least somewhat unattractive.

So probably the best solution is to have an MPI-level routable entity
that can do this stuff. Then you only need one public IP address and
potentially a small number of ports opened.

That being said, we are not opposed to putting port number controls in
Open MPI -- especially if it really is a problem for someone, not just a
hypothetical one ;-). But such controls should not be added in order to
support firewalled operation, because -- at a minimum -- without a bunch
of additional firewall configuration, they will not be enough.
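(For readers finding this thread later: more recent Open MPI releases
did grow MCA parameters along these lines. Assuming the parameter names
from later Open MPI documentation -- they are not in 2006-era releases
-- restricting the TCP BTL to a fixed range looks roughly like this;
`my_mpi_app` is a placeholder.)

```shell
# Restrict the TCP BTL to ports 46000-46099 (parameter names per later
# Open MPI docs; not available at the time of this email):
mpirun --mca btl_tcp_port_min_v4 46000 \
       --mca btl_tcp_port_range_v4 100 \
       -np 4 ./my_mpi_app
```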

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems