
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] busy wait in MPI_Recv
From: Brian Budge (brian.budge_at_[hidden])
Date: 2010-10-20 10:59:01

Thanks Dick, Eugene. That's what I figured. I was just hoping there might
be some more obscure MPI functions that might do what I want. I'll go ahead
and write my own yielding wrapper on irecv.

Thanks again,

sent from mobile phone

On Oct 20, 2010 5:24 AM, "Richard Treumann" <treumann_at_[hidden]> wrote:


Most HPC applications are run with one processor and one working thread per
MPI process. In this case, the node is not being used for other work, so even
if the MPI process did release a processor, there would be nothing else
important for it to do anyway.

In these applications, the blocking MPI call (like MPI_Recv) is issued only
when there is no more computation that can be done until the MPI_Recv
returns with the message.

Unless your application has other threads that can make valuable use of the
processor freed up by making MPI_Recv do yields, the polling "overhead" is
probably not something to worry about.

If you do have other work available for the freed processor to turn to, the
"problem" may be worth solving. MPI implementations generally default to a
polling approach because it makes MPI_Recv faster, and if there is nothing
else important for the processor to turn to, a fast MPI_Recv is what you want.
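Worth noting: Open MPI itself exposes a knob for this. If I remember right, setting the MCA parameter mpi_yield_when_idle (the "degraded" mode intended for oversubscribed nodes) makes the progress loop call yield between polls rather than spinning hard; it still polls, it does not block. The application name and process count below are just placeholders:

```shell
# Ask Open MPI's progress loop to yield between polls instead of
# spinning flat out ("degraded" mode, meant for oversubscribed nodes):
mpirun --mca mpi_yield_when_idle 1 -np 4 ./my_app
```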

Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

 From: Brian Budge <brian.budge_at_[hidden]>
 To: Open MPI Users <users_at_[hidden]>
 Date: 10/19/2010 09:47 PM
 Subject: [OMPI users] busy wait in MPI_Recv
 Sent by: users-bounces_at_[hidden]

Hi all -

I just ran a small test to find out the overhead of an MPI_Recv call
when no communication...

users mailing list