
Open MPI User's Mailing List Archives


From: George Bosilca (bosilca_at_[hidden])
Date: 2007-10-15 17:08:57

That's one possible way of achieving the overlap. However, it's not a
portable solution: as far as I know, among the open source libraries,
only Open MPI currently provides this "helper" thread.

Another way of achieving the same goal is to have a truly thread-safe
MPI library, with the user keeping a thread blocked in an MPI_Recv
that will eventually complete at the end of the application. This
approach seems more user friendly, as the user is in control of when
the overlap will occur. A minimal sketch of that approach follows.
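
For illustration (not from the original message): a minimal sketch of
the blocked-receiver-thread approach, assuming the library grants
MPI_THREAD_MULTIPLE; the tag, message size, and ranks are arbitrary
choices.

#include <mpi.h>
#include <pthread.h>
#include <stdlib.h>

#define TAG   7
#define COUNT (1 << 20)

static void *receiver(void *buf) {
    /* This thread blocks inside the library, so the transfer can
       progress while the main thread keeps computing. */
    MPI_Recv(buf, COUNT, MPI_DOUBLE, MPI_ANY_SOURCE, TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return NULL;
}

int main(int argc, char **argv) {
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)   /* library too restrictive */
        MPI_Abort(MPI_COMM_WORLD, 1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = calloc(COUNT, sizeof(double));
    if (rank == 0) {
        pthread_t t;
        pthread_create(&t, NULL, receiver, buf);
        /* ... computation overlapping the incoming message ... */
        pthread_join(t, NULL);
    } else if (rank == 1) {
        MPI_Send(buf, COUNT, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}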


On Oct 15, 2007, at 2:56 PM, Eric Thibodeau wrote:

> George,
> For completeness' sake, from what I understand here, the only
> way to get "true" communication and computation overlap is to have
> an "MPI broker" thread which would take care of all communications
> in the form of sync MPI calls. It is that thread which you call
> asynchronously and then let it manage the communications in the
> background... correct?
> Eric
> On October 15, 2007, George Bosilca wrote:
>> Eric,
>> No, there is no documentation about this in Open MPI. However, what
>> I described here is not specific to Open MPI; it's a general problem
>> with most/all MPI libraries. There are multiple scenarios where
>> non-blocking communications can improve the overall performance of a
>> parallel application. But, in general, the reason is related to
>> overlapping communications with computations, or communications with
>> communications.
>> The problem is that using non-blocking calls increases the critical
>> path compared with blocking ones, which usually never helps
>> performance. Now I'll explain the real reason behind that. The REAL
>> problem is that an MPI library usually cannot make progress while
>> the application is not inside an MPI call. Therefore, as soon as the
>> MPI library returns after posting the non-blocking send, no progress
>> is possible on that send until the user goes back into the MPI
>> library (see the sketch after this message). If you compare this
>> with the case of a blocking send, there the library does not return
>> until the data is pushed into the network buffers, i.e. the library
>> is the one in control until the send is completed.
>> Thanks,
>> george.
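
To make the progress point above concrete, here is a sketch (again
not from the original mail) of the usual workaround: periodically
re-entering the library with MPI_Test so the pending non-blocking
send can advance. do_some_work() is a hypothetical placeholder for
one slice of the application's computation.

#include <mpi.h>

extern void do_some_work(void);   /* placeholder, not a real API */

void overlapped_send(const double *buf, int count, int dest) {
    MPI_Request req;
    int done = 0;
    MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
    while (!done) {
        do_some_work();   /* compute one slice... */
        /* ...then step back into the library so it can progress
           the send; without calls like this, nothing moves. */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
    }
    /* any remaining computation continues after the send completes */
}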
>> On Oct 15, 2007, at 2:23 PM, Eric Thibodeau wrote:
>>> Hello George,
>>> What you're saying here is very interesting. I am presently
>>> profiling communication patterns for Parallel Genetic Algorithms
>>> and could not figure out why the async versions tended to be worse
>>> than their sync counterparts (imho, that was counter-intuitive).
>>> What you're basically saying here is that the async communications
>>> actually add some synchronization overhead that can only be
>>> compensated if the application overlaps computation with the async
>>> communications? Is there some "official" reference/documentation
>>> for this behaviour from Open MPI? (I know the MPI standard doesn't
>>> define the actual implementation of the communications and
>>> therefore lets the implementer do as he pleases.)
>>> Thanks,
>>> Eric
>>> On October 15, 2007, George Bosilca wrote:
>>>> Your conclusion is not necessarily/always true. MPI_Isend is just
>>>> the non-blocking version of the send operation. As one can
>>>> imagine, MPI_Isend + MPI_Wait increases the execution path [inside
>>>> the MPI library] compared with any blocking point-to-point
>>>> communication, leading to worse performance. The main interest of
>>>> the MPI_Isend operation is the possible overlap of computation
>>>> with communications, or the possible overlap between multiple
>>>> communications.
>>>> However, depending on the size of the message this might not be
>>>> true. For large messages, in order to keep the memory usage on the
>>>> receiver at a reasonable level, a rendezvous protocol is used. The
>>>> sender [after sending a small packet] waits until the receiver
>>>> confirms the message exchange (i.e. the corresponding receive
>>>> operation has been posted) before sending the large data. Using
>>>> MPI_Isend can therefore lead to longer execution times, as the
>>>> real transfer will be delayed until the program enters the next
>>>> MPI call (see the sketch after this message).
>>>> In general, using non-blocking operations can improve the
>>>> performance of the application, if and only if the application is
>>>> carefully crafted.
>>>> george.
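
As an illustration of the rendezvous effect described above (a
sketch, not from the original mail): the message size at which an
implementation switches to the rendezvous protocol varies, so COUNT
below is only an assumption, chosen large enough for most setups.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT (4 << 20)   /* ~32 MB of doubles: assumed large enough
                             to trigger the rendezvous protocol */

static void busy_compute(void) {
    /* stand-in for computation that makes no MPI calls */
    volatile double x = 0.0;
    for (long i = 0; i < 100000000L; i++) x += 1.0;
}

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double *buf = calloc(COUNT, sizeof(double));

    if (rank == 0) {
        MPI_Request req;
        MPI_Isend(buf, COUNT, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        busy_compute();   /* no MPI calls here: under rendezvous, the
                             bulk transfer has not even started yet */
        double t0 = MPI_Wtime();
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* most of the transfer
                                               happens in here */
        printf("time spent in MPI_Wait: %g s\n", MPI_Wtime() - t0);
    } else if (rank == 1) {
        MPI_Recv(buf, COUNT, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}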
>>>> On Oct 14, 2007, at 2:38 PM, Jeremias Spiegel wrote:
>>>>> Hi,
>>>>> I'm working with Open MPI on an InfiniBand cluster and see some
>>>>> strange effects when using MPI_Isend(). To my understanding it
>>>>> should always be quicker than MPI_Send() and MPI_Ssend(), yet in
>>>>> my program both MPI_Send() and MPI_Ssend() reproducibly perform
>>>>> quicker than MPI_Isend(). Is there something obvious I'm missing?
>>>>> Regards,
>>>>> Jeremias
>>> --
>>> Eric Thibodeau
>>> Neural Bucket Solutions Inc.
>>> T. (514) 736-1436
>>> C. (514) 710-0517
> --
> Eric Thibodeau
> Neural Bucket Solutions Inc.
> T. (514) 736-1436
> C. (514) 710-0517
