Open MPI User's Mailing List Archives

From: George Bosilca (bosilca_at_[hidden])
Date: 2006-10-23 13:57:14

I don't know what your bandwidth tester looks like, but 140 MB/s is
way too much for a single GigE card, unless it is a bidirectional
bandwidth. Usually, on a new-generation GigE card (Broadcom
Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express) with an
AMD processor (AMD Athlon(tm) 64 Processor 3500+) I only manage to
get around 800 Mb/s out of a point-to-point transfer. With an external
card not on the PCI Express bus I barely get 600 Mb/s...
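(For the arithmetic behind that claim: a 1 Gbit/s link carries at most
1000 / 8 = 125 MB/s on the wire, and Ethernet/IP/TCP framing brings the
practical ceiling down to roughly 117 MB/s with standard 1500-byte
frames, so a one-way rate of 140 MB/s cannot fit on a single link.)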

Why don't you use a real network performance tool such as NetPIPE? At
least it will confirm whether the bandwidth is the one you expect.
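For example (a rough sketch; the binary names NPtcp and NPmpi assume a
standard NetPIPE build, and the mpirun options mirror the ones from your
own runs):

   # raw TCP between two nodes: start the receiver, then the transmitter
   nodeA$ NPtcp
   nodeB$ NPtcp -h nodeA

   # the same measurement through Open MPI, pinned to one interface
   mpirun --mca btl_tcp_if_include eth0 -n 2 -bynode -hostfile host NPmpi

Comparing the NPtcp and NPmpi curves tells you how much of the gap, if
any, comes from the MPI layer rather than from the raw link.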


On Oct 23, 2006, at 4:56 AM, Jayanta Roy wrote:

> Hi,
> Some time ago I posted questions about making full use of dual gigabit
> support. I get a ~140 MB/s full-duplex transfer rate in each of the
> following runs:
> mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
> mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
> How can I combine these two ports, or use a proper routing table in
> place of the host file? I am using the openmpi-1.1 version.
> -Jayanta
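Regarding the quoted question about combining the two ports: a minimal
sketch, assuming the TCP BTL in the 1.1 series already accepts a
comma-separated interface list (as later Open MPI releases do) and that
eth0 and eth1 sit on separate subnets on every node, is simply to name
both interfaces in a single run:

   mpirun --mca btl_tcp_if_include eth0,eth1 -n 4 -bynode -hostfile host a.out

No change to the host file or routing table is needed for this; the TCP
BTL will normally open a socket per listed interface and stripe large
messages across them.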