Open MPI User's Mailing List Archives

From: George Bosilca (bosilca_at_[hidden])
Date: 2006-10-23 13:57:14

I don't know what your bandwidth tester looks like, but 140MB/s is
way too much for a single GigE card, unless it's a bidirectional
bandwidth. Usually, on a new-generation GigE card (Broadcom
Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express) with an
AMD processor (AMD Athlon(tm) 64 Processor 3500+) I only manage to
get around 800Mb/s out of a point-to-point transfer. With an external
card not on the PCI-Express bus I barely get 600Mb/s...
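[Editor's note: a quick back-of-envelope sketch of why 140MB/s cannot be a one-way rate on a single GigE link; the numbers here are standard line-rate arithmetic, not from the original thread.]

```python
# Gigabit Ethernet line rate is 1 Gbit/s. Ignoring protocol overhead,
# the hard one-way ceiling in megabytes per second is:
line_rate_bits = 1_000_000_000        # 1 Gbit/s
ceiling_mb_s = line_rate_bits / 8 / 1e6

print(ceiling_mb_s)  # 125.0 MB/s theoretical maximum, one direction

# A reported 140 MB/s exceeds this ceiling, so it can only be the
# sum of both directions (bidirectional / full-duplex traffic).
reported = 140.0
print(reported > ceiling_mb_s)  # True
```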

Why don't you use a real network performance tool such as NetPIPE? At
least it will ensure that the bandwidth is the one you expect.
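[Editor's note: a minimal sketch of a point-to-point NetPIPE run, assuming the `NPtcp` binary from the NetPIPE distribution is installed on both hosts; hostnames are placeholders.]

```shell
# On the receiving node, start the NetPIPE TCP receiver:
NPtcp

# On the sending node, point NetPIPE at the receiver
# (here "node1" is a hypothetical hostname):
NPtcp -h node1
```

NetPIPE sweeps message sizes and reports throughput per size, so you can see directly where a single GigE link saturates rather than inferring it from an MPI application.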


On Oct 23, 2006, at 4:56 AM, Jayanta Roy wrote:

> Hi,
> Some time ago I posted doubts about fully using dual gigabit
> support. See, I get a ~140MB/s full-duplex transfer rate in each of
> the following runs:
> mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
> mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
> How do I combine these two ports, or use a proper routing table in
> place of the host file? I am using the openmpi-1.1 version.
> -Jayanta
> _______________________________________________
> users mailing list
> users_at_[hidden]
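[Editor's note: on the question of combining the two ports, Open MPI's TCP BTL accepts a comma-separated interface list and can stripe traffic across them; a minimal sketch, assuming the same interface names and host file as in the quoted runs.]

```shell
# List both NICs in btl_tcp_if_include so the TCP BTL can use
# eth0 and eth1 together for message striping:
mpirun --mca btl_tcp_if_include eth0,eth1 -n 4 -bynode -hostfile host a.out
```

Whether this actually doubles throughput depends on the application's message sizes and on both interfaces having working, distinct routes between the nodes.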