Hi,
thanks for answering. Unfortunately, I had already tried that, too. The point is that I don't understand the resource consumption: even when the processor is yielded, the process is still busy-waiting, wasting cycles that could otherwise be used for actual work. Isn't there some way to enable an interrupt-driven mechanism, so that the wait/recv blocks the thread (i.e. puts it to sleep) until it is notified?
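
The closest thing to a workaround I have come up with is to poll with MPI_Iprobe and sleep between polls, roughly as sketched below (the helper name and the 1 ms interval are purely illustrative). It keeps the core mostly idle, but it trades CPU for latency, which is why I would prefer a real interrupt mechanism:

    #include <mpi.h>
    #include <unistd.h>

    /* Sketch: wait "politely" by sleeping between non-blocking probes
     * instead of letting MPI_Recv spin at 100% CPU. */
    static void polite_recv(void *buf, int count, MPI_Datatype type,
                            int src, int tag, MPI_Comm comm,
                            MPI_Status *status)
    {
        int flag = 0;
        while (!flag) {
            /* Non-blocking check for a matching message. */
            MPI_Iprobe(src, tag, comm, &flag, status);
            if (!flag)
                usleep(1000); /* 1 ms nap; adds up to ~1 ms extra latency */
        }
        /* A matching message is now pending, so this completes quickly. */
        MPI_Recv(buf, count, type, src, tag, comm, status);
    }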

Murat

Tim Mattox wrote:
You should look at these two FAQ entries:

http://www.open-mpi.org/faq/?category=running#oversubscribing
http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded

To get what you want, you need to force Open MPI to yield the processor
rather than aggressively spin while waiting for a message.
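
For example, assuming a reasonably recent Open MPI, something along these
lines (the application name and process count are placeholders; the FAQ
entries above describe the parameter in detail):

    mpirun --mca mpi_yield_when_idle 1 -np 4 ./your_app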

On 10/23/07, Murat Knecht <murat.knecht@student.hpi.uni-potsdam.de> wrote:
  
Hi,
Testing a distributed system locally, I couldn't help but notice that a
blocking MPI_Recv causes 100% CPU load. I deactivated the shared-memory
BTL (at both compile time and run time) and specified that only the tcp
and self BTLs be used. Still, one core stays busy. Even on a distributed
system I intend to perform work while waiting for incoming requests, so
having one core busy-waiting for requests is uncomfortable, to say the
least. Does Open MPI not use a blocking system call on the TCP socket
internally? Since I deactivated the understandably costly shared-memory
polling, this behavior seems odd to me.
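For reference, I launch the job roughly like this (the application name
and process count are placeholders):

    mpirun --mca btl tcp,self -np 2 ./myapp
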
Does anyone have an explanation, or even better a fix / workaround / solution?
thanks,
Murat