Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] busy wait in MPI_Recv
From: Richard Treumann (treumann_at_[hidden])
Date: 2010-10-20 08:21:22


Most HPC applications are run with one processor and one working thread
per MPI process. In that case, the node is not being used for other work,
so even if the MPI process did release a processor, there would be nothing
else important for it to do.

In these applications, the blocking MPI call (like MPI_Recv) is issued
only when there is no more computation that can be done until the MPI_Recv
returns with the message.

Unless your application has other threads that can make valuable use of
the processor freed up by making MPI_Recv yield, the polling
"overhead" is probably not something to worry about.

If you do have other work available for the freed processor to turn to,
the "problem" may be worth solving. MPI implementations generally
default to a polling approach because it makes MPI_Recv faster, and if
there is nothing else important for the processor to turn to, a fast
MPI_Recv is what matters.
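
For illustration, here is a minimal sketch of the kind of sleeping
receive loop asked about in the quoted message below. MPI_Iprobe and
nanosleep are standard MPI and POSIX calls; the helper name
recv_with_sleep and the 1 ms sleep interval are illustrative choices,
not anything Open MPI provides out of the box:

    /* Sketch: a receive that sleeps between probes instead of
     * busy-waiting, freeing the processor for other threads. */
    #include <mpi.h>
    #include <time.h>

    static int recv_with_sleep(void *buf, int count, MPI_Datatype type,
                               int source, int tag, MPI_Comm comm,
                               MPI_Status *status)
    {
        int flag = 0;
        /* Sleep 1 ms between probes (illustrative value). Longer
         * sleeps free more CPU but add latency to the receive. */
        struct timespec ts = { 0, 1000000L };

        /* Probe cheaply until a matching message has arrived... */
        while (1) {
            MPI_Iprobe(source, tag, comm, &flag, status);
            if (flag)
                break;
            nanosleep(&ts, NULL);
        }
        /* ...then do the actual receive, which now completes
         * immediately because the message is already pending. */
        return MPI_Recv(buf, count, type, source, tag, comm, status);
    }

The trade-off is exactly the one described above: the sleep interval
buys CPU for other work at the price of added receive latency.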

Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

From: Brian Budge <brian.budge_at_[hidden]>
To: Open MPI Users <users_at_[hidden]>
Date: 10/19/2010 09:47 PM
Subject: [OMPI users] busy wait in MPI_Recv
Hi all -

I just ran a small test to find out the overhead of an MPI_Recv call
when no communication is occurring. It seems quite high. I noticed
during my google excursions that Open MPI does busy waiting. I also
noticed that the option -mca mpi_yield_when_idle seems not to help
much (in fact, turning on the yield seems only to slow down the
program). What is the best way to reduce this polling cost during
low-communication intervals? Should I write my own recv loop that
sleeps for short periods? I don't want to go write something that has
possibly already been done much better in the library :)
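
For reference, a sketch of the kind of small test described above,
assuming CPU time is measured with getrusage while rank 0 sits in
MPI_Recv; the two-second delay on rank 1 is an arbitrary choice:

    /* Sketch: compare wall time vs. CPU time spent inside MPI_Recv
     * while no message is pending. If they are roughly equal, the
     * implementation is busy-waiting. Run with two ranks. */
    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    static double cpu_seconds(void)
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
             + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
    }

    int main(int argc, char **argv)
    {
        int rank, token = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            double w0 = MPI_Wtime(), c0 = cpu_seconds();
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            /* CPU time ~= wall time indicates busy-waiting. */
            printf("wall %.2fs, cpu %.2fs\n",
                   MPI_Wtime() - w0, cpu_seconds() - c0);
        } else if (rank == 1) {
            sleep(2);               /* keep rank 0 waiting */
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }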
