The current BTL/TCP and OOB/TCP code contains separate sockets for IPv4
and IPv6. Though it has never been a problem for me, this might cause an
out-of-FDs error in large clusters. (IIRC, rhc has already pointed this out.)
A possible way to reduce FD consumption would be the use of IPv4 mapped
IPv6 addresses. These addresses let one use a single AF_INET6 socket for
both IPv4 and IPv6.
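To illustrate the idea, here is a minimal sketch of opening one AF_INET6
listening socket that accepts both families. The helper name
open_dual_stack_listener is mine, not from the OMPI code; the key detail
is clearing IPV6_V6ONLY, which must be off for mapped addresses to work
(the default varies by OS, which is exactly the portability issue below):

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a single IPv6 socket that also accepts IPv4 connections via
 * IPv4-mapped addresses.  Returns the listening fd, or -1 on error.
 * Port 0 lets the kernel pick a free port. */
int open_dual_stack_listener(unsigned short port)
{
    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    /* IPV6_V6ONLY = 0 enables IPv4-mapped addresses on this socket.
     * On OpenBSD this fails (or is ignored) unless the admin has
     * enabled mapped addresses system-wide. */
    int off = 0;
    if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off)) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in6 sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin6_family = AF_INET6;
    sa.sin6_addr   = in6addr_any;    /* :: — all interfaces, both families */
    sa.sin6_port   = htons(port);

    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
        listen(fd, SOMAXCONN) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

An IPv4 client connecting to this socket shows up with a peer address of
the form ::ffff:a.b.c.d, so one fd replaces the current v4/v6 pair.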
One year ago, I chose not to employ these addresses for two main reasons:
- Windows XP doesn't support them
- OpenBSD has disabled them, but the system administrator can enable
them at runtime
These limitations are also mentioned here:
Nowadays, Vista (and the Windows Server line) has support for
IPv4-mapped IPv6 addresses.
If disabled on OpenBSD systems, the code wouldn't be able to do IPv4,
but as already mentioned, the admin could easily fix this.
Should we consider moving towards these mapped addresses? The trade-offs:
- less code, only one socket to handle
- lower FD consumption
- breaks WinXP support, but not Vista/Longhorn or later
- requires non-default kernel runtime setting on OpenBSD for IPv4
FWIW, FD consumption is the only real issue to consider.
Cluster and Metacomputing Working Group
Friedrich-Schiller-Universität Jena, Germany