It doesn't sound reasonable to me. There is a reason for this, and I
think it's a good reason. The sendi function works for some devices as
a fast path for sending data when the network is not flooded.
However, in the case where sendi cannot do the job we expect, the fact
that it returns the descriptor saves us a call (we don't have to do
the alloc call later). Therefore, in the PML we already have the
descriptor and we can hand it back to the BTL, which gives a chance
for asynchronous progress later on. Without this descriptor, the only
option the PML has is to put the PML request in a queue and try to
send it later, which is very expensive.
I don't see any good reason not to have it. The fact that it makes
the BTL a little bit more complex is not a good reason, as we would be
trading performance away for coding convenience.
On Feb 23, 2009, at 10:28 , Jeff Squyres wrote:
> Sounds reasonable to me. George / Brian?
> On Feb 21, 2009, at 2:11 AM, Eugene Loh wrote:
>> What: Eliminate the "descriptor" argument from sendi functions.
>> Why: The only thing this argument is used for is so that the sendi
>> function can allocate a descriptor in the event that the "send"
>> cannot complete. But, in that case, the sendi reverts to the PML,
>> where there is already code to allocate a descriptor. So, each
>> sendi function (in each BTL that has a sendi function) must have
>> code that is already in the PML anyhow. This is unnecessary extra
>> coding and not clean design.
>> Where: In each BTL that has a sendi function (only three, and
>> they are not all used) and in the function prototype and at the
>> PML calling site.
>> When: I'd like to incorporate this in the shared-memory latency
>> work I'm doing that we're targeting for 1.3.x.
>> Timeout: Feb 27.
>> devel mailing list
> Jeff Squyres
> Cisco Systems