We have some clusters consisting of a large pool of 8-way nodes
connected via Ethernet. On these particular machines, we'd like our
users to be able to run 8-way MPI jobs within a node, but we *don't*
want them running MPI jobs across nodes over the Ethernet. Thus, I'd
like to configure and build Open MPI to provide shared-memory support
(or TCP loopback) but disable general TCP support.
I realize that a user can run without TCP via something like "mpirun
--mca btl ^tcp", but that is left to the user's discretion; I need a
way to disable it systematically. Is there a way to configure it out
at build time, or is there a runtime configuration file I can modify
to turn it off? Also, when we configure with "--without-tcp", the
configure script doesn't complain, but TCP support is built anyway.
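For reference, the per-user workaround mentioned above looks like the
following (the `self` and `sm` BTL component names are assumptions based
on the Open MPI releases we run; adjust if yours differ):

```shell
# Exclude the TCP BTL for a single run (negative selection):
mpirun --mca btl ^tcp -np 8 ./my_mpi_app

# Or positively select only the loopback and shared-memory BTLs:
mpirun --mca btl self,sm -np 8 ./my_mpi_app
```

Both forms depend on each user remembering to pass the flag, which is
exactly what we are trying to avoid.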
MPI Support @ LLNL