
Open MPI User's Mailing List Archives


From: Durga Choudhury (dpchoudh_at_[hidden])
Date: 2006-10-23 10:26:18

Did you try channel bonding? If your OS is Linux, there are plenty of
"howto" guides on the Internet that will tell you how to set it up.
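For reference, a minimal bonding sketch using the Linux bonding driver. The interface names (eth0, eth1) and the IP address are assumptions; adjust for your machines:

```shell
# Load the bonding driver in round-robin mode (mode=balance-rr stripes
# packets across both slaves); miimon=100 checks link state every 100 ms.
modprobe bonding mode=balance-rr miimon=100

# Bring up the bonded interface (address is an example, not from the thread).
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up

# Enslave both gigabit ports to bond0.
ifenslave bond0 eth0 eth1
```

Both endpoints (and, with balance-rr, ideally the switch) need matching configuration, so check the kernel's bonding documentation for your setup.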

However, your CPU might be the bottleneck here. How much CPU headroom
do you have left at 140 MB/s?
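One quick way to check, while the MPI job is running, is to watch per-CPU utilization; heavy softirq/interrupt time suggests the NIC driver is eating the cycles (these are standard Linux diagnostics, not something specific to this thread):

```shell
# Per-CPU utilization, refreshed every second; look at the %irq and
# %soft columns while the transfer is running.
mpstat -P ALL 1

# Raw interrupt counts per NIC; a rapidly climbing count on the eth
# lines points at per-packet interrupt overhead.
grep eth /proc/interrupts
```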

If the CPU *is* the bottleneck, changing your network driver (e.g. from
interrupt-based to poll-based packet transfer) might help. If you are
unfamiliar with writing network drivers for your OS, this may not be a
trivial task, though.

Oh, and as I pointed out last time, if all of the above check out, try
connecting your second link to a separate PC and see whether you then get
twice the throughput. If so, the ECMP implementation in your IP stack is
what is limiting you. This is the hardest one to fix: you would have to
rewrite a few routines in the IPv4 path and recompile the kernel, which is
only practical if you are comfortable building kernels and your OS is Linux.
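As an alternative to bonding at the OS level, Open MPI's TCP BTL can itself be told to use both interfaces, in which case it stripes traffic over them without any kernel changes. A sketch, reusing the command line from the original post (the hostfile and interface names are taken from there):

```shell
# btl_tcp_if_include accepts a comma-separated list of interfaces;
# Open MPI will then open TCP connections over both eth0 and eth1.
mpirun --mca btl_tcp_if_include eth0,eth1 -n 4 -bynode -hostfile host a.out
```

Whether this actually doubles bandwidth still depends on the CPU and message sizes, but it is much easier to try than channel bonding or kernel patches.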

On 10/23/06, Jayanta Roy <jroy_at_[hidden]> wrote:
> Hi,
> Some time ago I posted a question about making full use of dual gigabit
> links. I get a ~140 MB/s full-duplex transfer rate in each of the
> following runs:
> mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
> mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
> How can I combine these two ports, or use a proper routing table in place
> of the host file? I am using the openmpi-1.1 version.
> -Jayanta
> _______________________________________________
> users mailing list
> users_at_[hidden]

Devil wanted omnipresence;
He therefore created communists.