Open MPI User's Mailing List Archives

From: George Bosilca (bosilca_at_[hidden])
Date: 2007-10-15 05:58:48


Your conclusion is not necessarily/always true. MPI_Isend is just
the non-blocking version of the send operation. As one can imagine, an
MPI_Isend + MPI_Wait increases the execution path [inside the MPI
library] compared with any blocking point-to-point communication,
leading to worse performance. The main interest of the MPI_Isend
operation is the possible overlap of computation with communication,
or the possible overlap between multiple communications.
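
For illustration, a minimal sketch of that overlap pattern
(do_local_work() and the buffer arguments are placeholders, not code
from any real application):

#include <mpi.h>

extern void do_local_work(void);  /* computation not touching the buffers */

void exchange(double *sendbuf, double *recvbuf, int n, int peer)
{
    MPI_Request reqs[2];

    /* post both transfers without blocking */
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    do_local_work();  /* overlap: compute while the transfers proceed */

    /* only now pay the cost of completing the communication */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}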

However, depending on the size of the message this might not be true.
For large messages, in order to keep the memory usage on the receiver
at a reasonable level, a rendezvous protocol is used. The sender
[after sending a small packet] waits until the receiver confirms the
message exchange (i.e. the corresponding receive operation has been
posted) before sending the large data. Using MPI_Isend can then lead
to longer execution times, as the real transfer will be delayed until
the program enters the next MPI call.
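
One way to avoid that stall is to call MPI_Test from time to time, so
the library gets a chance to progress the rendezvous while you
compute. A rough sketch (more_work() and do_some_work() are
placeholders):

#include <mpi.h>

extern int  more_work(void);
extern void do_some_work(void);

void send_large(double *buf, int n, int peer)
{
    MPI_Request req;
    int done = 0;

    MPI_Isend(buf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);

    while (more_work()) {
        do_some_work();
        if (!done)
            /* gives the library a chance to advance the rendezvous */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
    }
    if (!done)
        MPI_Wait(&req, MPI_STATUS_IGNORE);
}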

In general, using non-blocking operations can improve the performance
of the application, if and only if the application is carefully crafted.

   george.

On Oct 14, 2007, at 2:38 PM, Jeremias Spiegel wrote:

> Hi,
> I'm working with Open MPI on an InfiniBand cluster and see some strange
> effects when using MPI_Isend(). To my understanding it should always be
> quicker than MPI_Send() and MPI_Ssend(), yet in my program both MPI_Send()
> and MPI_Ssend() reproducibly perform quicker than MPI_Isend(). Is there
> something obvious I'm missing?
>
> Regards,
> Jeremias


