Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] iprobe and opal_progress
From: Terry Dontje (Terry.Dontje_at_[hidden])
Date: 2008-06-18 11:00:29


OK. However, I've seen a 40-150us hit from calling opal_progress,
which is why I was hoping for something lighter weight.

--td
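
A minimal timing sketch, using only standard MPI calls, of how a
per-call MPI_Iprobe cost such as the 40-150us figure above could be
measured (it illustrates the measurement method only and does not
reproduce that number):

/* iprobe_cost.c -- rough per-call timing of MPI_Iprobe. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 100000;
    int flag;
    MPI_Status status;
    double t0, t1;

    MPI_Init(&argc, &argv);

    t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        /* No matching send is ever posted, so each call only pays for
         * the probe path plus whatever progress it triggers. */
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                   &flag, &status);
    }
    t1 = MPI_Wtime();

    printf("average MPI_Iprobe cost: %.2f us\n",
           (t1 - t0) * 1e6 / iters);

    MPI_Finalize();
    return 0;
}
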
George Bosilca wrote:
> No, please call opal_progress. Otherwise, you will create different
> behavior based on the available networks, basically splitting the
> networks that register a socket from those that don't. It might not
> be a big deal today (except if the user calls MPI_Iprobe to progress
> communications), as TCP is the only network that uses file
> descriptors, but it will be in the case of multithreaded applications.
>
> george.
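
A minimal sketch of the "progress first, then probe" flow described
above. opal_progress() is the real Open MPI symbol (its prototype is
simplified here; see opal/runtime/opal_progress.h); everything else is
an illustrative placeholder, not the actual OB1 code:

#include <stdbool.h>

/* Real Open MPI symbol; prototype simplified for this sketch. */
extern void opal_progress(void);

/* Stub standing in for the real matching logic over the
 * unexpected-message queue. */
static bool try_match(int src, int tag, void *comm, void *status)
{
    (void)src; (void)tag; (void)comm; (void)status;
    return false;
}

int sketch_iprobe(int src, int tag, void *comm,
                  int *matched, void *status)
{
    /* Drive every transport's progress engine first, so behavior does
     * not depend on whether a network registered a file descriptor
     * (TCP) or not. */
    opal_progress();

    /* Then look for a match among already-arrived fragments. */
    *matched = try_match(src, tag, comm, status) ? 1 : 0;
    return 0;
}
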
>
> On Jun 18, 2008, at 4:25 PM, Terry Dontje wrote:
>
>> OK, I'll see if I can figure out the below. Though is this really
>> something that can be used in both MPI_Iprobe and MPI_Probe? One
>> other question: is the use of opal_progress in MPI_Iprobe the right
>> thing to do? Is there something a little lighter weight
>> (bml_progress, maybe)?
>>
>> --td
>>
>> George Bosilca wrote:
>>> I kind of remember that we had a discussion about this long ago,
>>> and that we decided to have it this way for latency. Now, looking
>>> at the code, it seems way too ugly to me. I think Brian has a
>>> point. MPI_Probe and MPI_Iprobe are MPI functions, and they are
>>> expected to make progress all the time. So calling opal_progress
>>> and then doing the probe seems like the smartest and simplest
>>> approach.
>>>
>>> However, if you want to do this, then it's better if we do it the
>>> right way. What we have today in the PML OB1 for probe is horribly
>>> expensive. Initializing a complete request that will never be used
>>> for anything other than matching is overkill. The only fields that
>>> you really need are the flags and the matching information. How
>>> about creating a request, setting those fields, and then calling
>>> the matching directly? This way, we can create a special path for
>>> probes, and this will remove some ifs from the critical path for
>>> receives ...
>>>
>>> george.
>>>
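
A rough sketch of the lighter-weight probe path proposed above: fill in
only the matching information on a minimal request and hand it straight
to the matching logic, instead of running the full receive-request
initialization. All names below are placeholders, not the actual OB1
types or macros:

#include <stdbool.h>

/* Placeholder for a stripped-down probe request: only the fields
 * needed for matching, no buffers, no completion machinery. */
typedef struct {
    int   src;        /* peer to match, or any-source         */
    int   tag;        /* tag to match, or any-tag             */
    void *comm;       /* communicator                         */
    bool  probe_only; /* request only matches, never receives */
} probe_req_t;

/* Stub standing in for the matching logic over the unexpected
 * fragment list; would return true and fill *status on a match. */
static bool match_unexpected(probe_req_t *req, void *status)
{
    (void)req; (void)status;
    return false;
}

int sketch_lightweight_iprobe(int src, int tag, void *comm,
                              int *matched, void *status)
{
    /* Set only the matching fields -- the "special path for probes"
     * avoids constructing and later tearing down a full request. */
    probe_req_t req = { .src = src, .tag = tag,
                        .comm = comm, .probe_only = true };

    *matched = match_unexpected(&req, status) ? 1 : 0;
    return 0;
}
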
>>> On Jun 18, 2008, at 3:57 PM, Brian W. Barrett wrote:
>>>
>>>> On Wed, 18 Jun 2008, Terry Dontje wrote:
>>>>
>>>>> Jeff Squyres wrote:
>>>>>> Perhaps we did that as a latency optimization...?
>>>>>> George / Brian / Galen -- do you guys know/remember why this was
>>>>>> done?
>>>>>> On the surface, it looks like it would be ok to call progress and
>>>>>> check again to see if it found the match. Can anyone think of a
>>>>>> deeper reason not to?
>>>>> If it is OK to check again, my next question is going to be how.
>>>>> After looking at the code some more, I found that iprobe
>>>>> requests are not actually queued. So can I just do another
>>>>> MCA_PML_OB1_RECV_REQUEST_START on the init'd IPROBE_REQUEST
>>>>> after the call to opal_progress to force a search on the
>>>>> unexpected queue, or do I need to FINI the request and
>>>>> regenerate it?
>>>>
>>>> I think you'd have to re-init the request at a minimum. In other
>>>> words, just always call opal_progress at the top of iprobe and be
>>>> done :).
>>>>
>>>> Brian
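
For completeness, a sketch of the retry variant asked about above: try
the match, and only if it fails drive opal_progress(), re-initialize
the probe request, and try once more (the point being that a started
request cannot simply be re-started). opal_progress() is the only real
Open MPI symbol here; the placeholder helpers merely mimic macros such
as MCA_PML_OB1_RECV_REQUEST_START rather than reproduce them:

#include <stdbool.h>

extern void opal_progress(void); /* real symbol; prototype simplified */

/* Placeholder probe request and helpers standing in for the real OB1
 * request type and its INIT/START/FINI macros. */
typedef struct { int src, tag; void *comm; } probe_req_t;

static void probe_req_init(probe_req_t *r, int src, int tag, void *comm)
{
    r->src = src; r->tag = tag; r->comm = comm;
}

static bool probe_req_start(probe_req_t *r, void *status)
{
    /* Would search the unexpected queue for a match. */
    (void)r; (void)status;
    return false;
}

int sketch_iprobe_retry(int src, int tag, void *comm,
                        int *matched, void *status)
{
    probe_req_t req;

    probe_req_init(&req, src, tag, comm);
    if (probe_req_start(&req, status)) {
        *matched = 1;
        return 0;
    }

    /* Nothing matched: progress the networks, then re-initialize the
     * request before searching again. */
    opal_progress();
    probe_req_init(&req, src, tag, comm);
    *matched = probe_req_start(&req, status) ? 1 : 0;
    return 0;
}
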