Open MPI User's Mailing List Archives

From: Allan Menezes (amenezes007_at_[hidden])
Date: 2006-02-24 22:11:07


Hi,
  I have a 16-node AMD/P4 cluster running OSCAR 4.2.1 Beta and FC4.
Each machine has two gigabit network cards: a Realtek 8169, connected
to a Netgear GS116 gigabit switch with a maximum MTU of 1500, and a
D-Link card with a SysKonnect chipset, connected to a managed Netgear
GS724T gigabit switch with jumbo frames enabled on the switch and each
D-Link card's MTU set to 9000 in its ifcfg-eth1 file.
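
For reference, the jumbo-frame setting is just an MTU line in
/etc/sysconfig/network-scripts/ifcfg-eth1; the address below is only an
example, the MTU line is the part that matters:

  DEVICE=eth1
  BOOTPROTO=static
  IPADDR=192.168.2.10      # example address; per-node addresses differ
  NETMASK=255.255.255.0
  ONBOOT=yes
  MTU=9000                 # jumbo frames on the GS724T network
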
I have 512 MB of memory on each machine, for a total of 8 GB, and a
hard disk on each node. I want to know how I can use Open MPI (any
version >= 1.0.1) with Ethernet bonding so that the two gigabit cards
together boost performance.
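
I have also read that Open MPI's TCP BTL can stripe traffic across
several interfaces on its own, without kernel-level bonding; if that is
right, would a command along these lines (eth0/eth1 and the hostfile
name are just placeholders) drive both cards?

  mpirun -np 16 --hostfile myhosts \
      --mca btl tcp,self \
      --mca btl_tcp_if_include eth0,eth1 \
      ./xhpl
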
I use HPL/Linpack to run benchmarks: I get 28.36 GFlops with Open MPI
1.1a1br9098 and 28.7 GFlops with MPICH2 1.0.3, using only one gigabit
interface (the D-Link) with jumbo frames (MTU=9000) for both runs. With
Open MPI I use -mca btl tcp. HPL.dat has N=26760 and NB=120, with P=4
and Q=4 for 16 processors, for both Open MPI and MPICH2.
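
For reference, the corresponding lines of my HPL.dat, in the standard
HPL.dat layout (the rest of the file is omitted here), are:

  1            # of problems sizes (N)
  26760        Ns
  1            # of NBs
  120          NBs
  4            Ps
  4            Qs
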
Can anyone tell me whether I will get a performance increase beyond
29 GFlops, using the two switches and two gigabit NICs per node, if I
use Ethernet channel bonding in FC4?
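
The bonding setup I have in mind is the standard Linux bonding driver
on FC4; the mode and addresses below are just what I would try first,
not a tested configuration. In /etc/modprobe.conf:

  alias bond0 bonding
  options bond0 mode=balance-rr miimon=100

In /etc/sysconfig/network-scripts/ifcfg-bond0:

  DEVICE=bond0
  IPADDR=192.168.3.10      # example address
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

and in each slave's ifcfg file (ifcfg-eth0 shown; ifcfg-eth1 is the
same except for DEVICE):

  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none

One thing I am unsure about is the MTU: the bonded interface takes a
single MTU, and only one of my two switches supports jumbo frames, so I
would presumably have to drop bond0 back to MTU=1500.
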
Thank you,
Allan Menezes