Did you try channel bonding? If your OS is Linux, there are plenty of howtos on the internet that will tell you how to set it up.
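For reference, the classic Linux bonding setup looks roughly like this (a minimal sketch; the IP address is illustrative, and I'm assuming the bonding module and the ifenslave tool are available on your distro):

```shell
# Load the bonding driver in round-robin mode with link monitoring every 100 ms
modprobe bonding mode=balance-rr miimon=100
# Bring up the bond interface with an address (adjust to your subnet)
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
# Enslave both gigabit ports to the bond
ifenslave bond0 eth0 eth1
```

With balance-rr both links carry traffic for a single stream, which is what you want here; other modes (e.g. active-backup) only give failover.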
However, your CPU might be the bottleneck in this case. How much CPU headroom do you have left at 140 MB/s?
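A quick way to check, while the transfer is running, is to sample /proc/stat and compute how busy the CPU is (Linux-specific sketch; field layout assumed to be the standard "cpu user nice system idle ..." first line):

```shell
# Read aggregate CPU counters (jiffies) from the first line of /proc/stat
read -r _ u n s i _ < /proc/stat
t1=$((u + n + s + i)); i1=$i
sleep 1
read -r _ u n s i _ < /proc/stat
t2=$((u + n + s + i)); i2=$i
# Guard against a zero interval, then compute percent busy
dt=$((t2 - t1)); [ "$dt" -gt 0 ] || dt=1
busy=$(( 100 * (dt - (i2 - i1)) / dt ))
echo "CPU busy: ${busy}%"
```

If this sits near 100% during the MPI runs, adding a second link will not buy you much until the per-packet processing cost comes down.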
If the CPU *is* the bottleneck, changing your network driver (e.g. from interrupt-based to poll-based packet processing) might help. This is not a trivial task, though, if you are unfamiliar with writing network drivers for your OS.
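Before going that far, it is worth measuring the interrupt rate: if the driver fires an interrupt per packet, the rate at 140 MB/s will be enormous, and a polling (NAPI-style) driver would help. A rough Linux check using the total interrupt counter in /proc/stat:

```shell
# Sample the system-wide interrupt count one second apart
i1=$(awk '/^intr/ {print $2; exit}' /proc/stat)
sleep 1
i2=$(awk '/^intr/ {print $2; exit}' /proc/stat)
echo "interrupts/s: $((i2 - i1))"
```

Tens of thousands of interrupts per second under load is a strong hint that the driver is interrupt-bound.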
Oh, and as I pointed out last time, if all of the above seem OK, try putting your second link on a separate PC and see if you can get twice the throughput. If so, then the ECMP implementation of your IP stack is what is causing the problem. This is the hardest one to fix: you could rewrite a few routines in the IPv4 processing path and recompile the kernel, if you are familiar with kernel building and your OS is Linux.
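A less invasive workaround than patching the kernel is to avoid ECMP entirely and pin each set of peers to one link with explicit routes (illustrative subnets; substitute your own addressing):

```shell
# Send traffic for one half of the cluster out eth0, the other half out eth1,
# instead of one equal-cost multipath route over both links
ip route add 192.168.1.0/24 dev eth0
ip route add 192.168.2.0/24 dev eth1
```

That way each TCP connection takes a deterministic path, and the per-flow load-balancing behaviour of the ECMP code never comes into play.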
On 10/23/06, Jayanta Roy <firstname.lastname@example.org> wrote:
Some time ago I posted doubts about fully using dual gigabit support.
I get a ~140 MB/s full-duplex transfer rate in each of the following:
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
How do I combine these two ports, or use a proper routing table in place
of the host file? I am using openmpi-1.1.
Devil wanted omnipresence;
He therefore created communists.