Thanks for your reply, Jeff.
> > It never occurred to me that the headnode would try to communicate
> > with the slave using infiniband interfaces... Orthogonally, what are
> The problem here is that since your IB IP addresses are
> "public" (meaning that they're not in the IETF defined ranges for
> private IP addresses), Open MPI assumes that they can be used to
> communicate with your back-end nodes on the IPoIB network. See this
> FAQ entry for details:
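If it helps anyone else hitting this, one workaround is to restrict
Open MPI's TCP traffic to the Ethernet interface via the usual MCA
parameters (interface names, process count, and the application name
below are placeholders for your own setup):

```shell
# Tell the TCP BTL and out-of-band channel to use only eth0,
# so Open MPI never tries the "public" IPoIB addresses.
mpirun --mca btl_tcp_if_include eth0 \
       --mca oob_tcp_if_include eth0 \
       -np 16 ./my_app
```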
The pointer was rather informative. We do have to use non-standard
ranges for the IB interfaces because we perform automatic IP-over-IB
configuration based on the eth0 IP address and netmask. Given a
10.x.y.z/8 configuration for eth0, addresses assigned to the
InfiniBand interfaces would not only end up with the same subnet ID,
but could even conflict with existing Ethernet interface addresses.
Hence the use of 20.x.y.z and 30.x.y.z for ib0 and ib1, respectively.
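In rough pseudocode, the mapping we generate is just a first-octet
substitution (the helper name here is hypothetical, not our actual
configuration script):

```python
def ipoib_addrs(eth0_ip):
    """Derive ib0/ib1 IPs from the eth0 address: given 10.x.y.z,
    assign 20.x.y.z to ib0 and 30.x.y.z to ib1."""
    octets = eth0_ip.split(".")
    ib0 = ".".join(["20"] + octets[1:])
    ib1 = ".".join(["30"] + octets[1:])
    return ib0, ib1

# e.g. ipoib_addrs("10.1.2.3") -> ("20.1.2.3", "30.1.2.3")
```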
> > the industry standard OpenMPI benchmark tests I could run to perform a
> > real test?
> Just about anything will work -- NetPIPE, the Intel Benchmarks, ...etc.
I actually tried benchmarking with HPL (High-Performance Linpack).
Specifically, I'm interested in measuring the performance improvement
when running Open MPI jobs over the several available interconnects.
However, I have difficulty interpreting HPL's rather cryptic output.
I've seen members of the list using the xhpl benchmark; perhaps
someone could shed some light on how to read its output? Also, my
understanding is that the only advantage of having multiple
interconnects available is the increased bandwidth from Open MPI's
message striping - correct? If so, I would probably benefit from a
more bandwidth-intensive benchmark, and I have a feeling that is not
one of HPL's strengths. If the Open MPI community could point me in
the right direction, it would be greatly appreciated.
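For what it's worth, my current understanding of the one HPL number
that matters: the Gflops column is just the standard dense-LU
operation count divided by wall time. A minimal sketch of that
calculation (N and Time taken from HPL's output table):

```python
def hpl_gflops(n, seconds):
    """Recompute HPL's Gflops figure from problem size N and run time,
    using the standard LU flop count (2/3*N^3 + 2*N^2)."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e9
```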
Thanks again for your willingness to help and share your expertise.