You should look at these two FAQ entries:
To get what you want, you need to force Open MPI to yield the processor rather
than aggressively poll while waiting for a message.
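For example, the yield behavior can be forced at launch time with an MCA
parameter (a sketch; `./my_app` and the process count are placeholders, and the
parameter name `mpi_yield_when_idle` is assumed from that era's Open MPI FAQ):

```sh
# Ask Open MPI's progress engine to yield the CPU between polls
# instead of spinning at 100%. Trades some message latency for an
# (almost) idle core while blocked in MPI_Recv.
mpirun --mca mpi_yield_when_idle 1 -np 2 ./my_app
```

Note this still polls; it just calls yield between polls, so some residual CPU
use is expected.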
On 10/23/07, Murat Knecht <murat.knecht_at_[hidden]> wrote:
> Testing a distributed system locally, I couldn't help but notice that a
> blocking MPI_Recv causes 100% CPU load. I deactivated (at both compile-
> and run-time) the shared-memory BTL, and specified "tcp,self" to
> be used. Still one core stays busy. Even on a distributed system I intend to
> perform work while waiting for incoming requests, so having one core
> busy-waiting for requests is uncomfortable to say the
> least. Does Open MPI not use some blocking system call on the TCP port
> internally? Since I deactivated the understandably costly shared-memory
> waits, this seems weird to me.
> Does anyone have an explanation, or better yet a fix / workaround / solution?
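A common application-level workaround for the busy-wait described above is to
avoid blocking in MPI_Recv altogether: poll with MPI_Iprobe and sleep between
polls. A minimal sketch (not from the post; the tag, message type, and 1 ms
sleep interval are illustrative assumptions):

```c
/* Poll-and-sleep instead of a blocking MPI_Recv, so the core stays
 * mostly idle while waiting. Run with at least 2 ranks. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>   /* usleep */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int flag = 0;
        MPI_Status status;
        /* Check for a pending message; yield the CPU between checks. */
        while (!flag) {
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                       &flag, &status);
            if (!flag)
                usleep(1000);  /* 1 ms: tune latency vs. CPU trade-off */
        }
        int msg;
        /* A message is now waiting, so this MPI_Recv returns immediately. */
        MPI_Recv(&msg, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 received %d\n", msg);
    } else if (rank == 1) {
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

The sleep interval controls the trade-off: a longer sleep means less CPU use
but higher latency before an incoming request is noticed.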
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmattox_at_[hidden] || timattox_at_[hidden]
I'm a bright... http://www.the-brights.net/