
Open MPI User's Mailing List Archives


From: Tim Mattox (timattox_at_[hidden])
Date: 2007-10-23 08:29:34


You should look at these two FAQ entries:

http://www.open-mpi.org/faq/?category=running#oversubscribing
http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded

To get the behavior you want, you need to force Open MPI to yield the
processor rather than aggressively spin while waiting for a message.
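For reference, the second FAQ entry above covers the mpi_yield_when_idle MCA parameter, which switches Open MPI from aggressive spinning to "degraded" mode (calling sched_yield() while blocked). A typical invocation looks like this; `./my_mpi_app` is a placeholder for your own program:

```shell
# Force degraded mode so blocked processes yield the CPU instead of
# spinning; useful when oversubscribing or sharing cores with other work.
mpirun --mca mpi_yield_when_idle 1 -np 4 ./my_mpi_app

# The same MCA parameter can be set through the environment instead:
export OMPI_MCA_mpi_yield_when_idle=1
mpirun -np 4 ./my_mpi_app
```

Note that yielding only reduces contention; the process still polls, so it will not drop to a true blocking wait.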

On 10/23/07, Murat Knecht <murat.knecht_at_[hidden]> wrote:
> Hi,
> Testing a distributed system locally, I couldn't help but notice that a
> blocking MPI_Recv causes 100% CPU load. I deactivated (at both compile-
> and run-time) the shared memory bt-layer, and specified "tcp, self" to
> be used. Still one core busy. Even on a distributed system I intend to
> perform work, while waiting for incoming requests. For this purpose
> having one core busy waiting for requests is uncomfortable to say the
> least. Does OpenMPI not use some blocking system call to a tcp port
> internally? Since i deactivated the understandably costly shared-memory
> waits, this seems weird to me.
> Someone has an explanation or even better a fix / workaround / solution ?
> thanks,
> Murat
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
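[A common application-level workaround, not spelled out in this thread, is to replace the blocking MPI_Recv with MPI_Irecv plus an MPI_Test polling loop that sleeps or does useful work between polls. A minimal sketch follows; the matching sender rank is omitted, and the message shape (one int, tag 0) is an assumption for illustration:]

```c
/* Sketch: wait for a message without spinning at 100% CPU.
 * Assumes some other rank eventually sends one int with tag 0;
 * the sender side is omitted here. */
#include <mpi.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int payload;
    int flag = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);

    /* Post a non-blocking receive instead of a blocking MPI_Recv. */
    MPI_Irecv(&payload, 1, MPI_INT, MPI_ANY_SOURCE, 0,
              MPI_COMM_WORLD, &req);

    /* Poll for completion; between polls, do useful work or sleep. */
    while (!flag) {
        MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        if (!flag)
            usleep(1000);  /* ~1 ms sleep: trades latency for idle CPU */
    }

    MPI_Finalize();
    return 0;
}
```

The usleep() interval is the knob: longer sleeps mean lower CPU use but higher message latency, so pick it to match how quickly you need to react to incoming requests.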

-- 
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
 tmattox_at_[hidden] || timattox_at_[hidden]
    I'm a bright... http://www.the-brights.net/