Are both the IB HCA and the Ethernet interfaces on the same physical
bus?
If they're not, the need for multiplexing them is diminished (but, of
course, it depends on what you're trying to do -- if everything is
using huge memory transfers, then your bottleneck will be RAM, not
the bus that the NICs reside on).
That being said, something we have not explored at all is the idea of
multiplexing at the MPI layer. Perhaps something like "this is a low
priority communicator; I want you to only use the 'tcp' BTL on it"
and "this is a high priority communicator; I want you to only use the
'openib' BTL on it".
I haven't thought at all about whether that is possible. It would
probably take some mucking around in both the bml and the ob1 pml.
Hmm. It may or may not be worth it, but I raise the possibility...
On Apr 19, 2007, at 9:18 PM, pooja_at_[hidden] wrote:
> Some of our clusters use Gigabit Ethernet and Infiniband.
> So we are trying to multiplex them.
> Thanks and Regards
>> On Thu, Apr 19, 2007 at 06:58:37PM -0400, pooja_at_[hidden] wrote:
>>> I am Pooja, working with Chaitali on this project.
>>> The idea behind this is: while running parallelized code, if a huge
>>> chunk of serial computation is encountered, the underlying network
>>> infrastructure can be used for some other data transfer during that
>>> time. This increases network utilization.
>>> But this (non-MPI) data transfer should not keep MPI calls blocked,
>>> so we need to give them priorities.
>>> Also, we are trying to predict the behavior of the code (e.g.,
>>> whether upcoming MPI calls will arrive at short intervals or at
>>> long intervals) based on previous calls.
>>> As a result we can make this mechanism more efficient.
>> Ok, so you have a cluster with Infiniband, and while the network
>> traffic is low you want to utilize the Infiniband network for other
>> data with a lower priority?
>> What does this have to do with TCP, or are you using TCP over
>> Infiniband?
>> Christian Leber
>> devel mailing list