I don't understand one thing, though, and would appreciate your comments.
In various interfaces, such as network sockets or threads waiting for data from somewhere, there are solutions based on _not_ checking the state of the socket or queue continuously, but instead getting _interrupted_ when data arrives, like condition variables for threads.
I am not very clear on the details, but it seems that in these contexts continuous polling is avoided, so actual CPU usage usually stays well below 100%.
Why can't something similar be implemented for broadcast, for example? (A sketch of the pattern I mean follows below.)
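To make the question concrete, here is a minimal sketch of the condition-variable pattern I have in mind, using plain pthreads rather than MPI; all names here are just illustrative, not anything from Open MPI:

    #include <pthread.h>
    #include <stdio.h>

    /* Illustration only: the consumer blocks on a condition variable
     * instead of spinning; the kernel wakes it up only when the
     * producer signals that data has arrived. */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int data_ready = 0;

    void *consumer(void *arg)
    {
        pthread_mutex_lock(&lock);
        while (!data_ready)                    /* no busy loop:       */
            pthread_cond_wait(&cond, &lock);   /* sleeps at ~0% CPU   */
        pthread_mutex_unlock(&lock);
        printf("data consumed\n");
        return NULL;
    }

    void producer(void)
    {
        pthread_mutex_lock(&lock);
        data_ready = 1;
        pthread_cond_signal(&cond);            /* wake the waiter     */
        pthread_mutex_unlock(&lock);
    }

While the consumer sits in pthread_cond_wait(), it consumes essentially no CPU, unlike a process spinning in MPI_Bcast.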
From: "Jeff Squyres" [jsquyres_at_[hidden]]
Date: 13/12/2010 03:55 PM
To: "Open MPI Users"
Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu
I think there *was* a decision, and it effectively changed how sched_yield() operates, such that it may not do what we expect any more.
See this thread (the discussion of Linux/sched_yield() comes in the later messages):
I believe there are similar threads in the MPICH mailing list archives; that's why Dave posted about it on the OMPI list.
We briefly discussed replacing OMPI's sched_yield() with a usleep(1), but it was shot down.
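Schematically, the kind of progress loop being discussed looks something like the following; this is a hedged sketch with hypothetical function names, not actual Open MPI code:

    #include <sched.h>
    #include <unistd.h>

    extern int  request_complete(void);   /* hypothetical completion test  */
    extern void progress_engine(void);    /* hypothetical: poll the network */

    void wait_for_completion(void)
    {
        while (!request_complete()) {
            progress_engine();  /* polls, burning CPU the whole time */
            sched_yield();      /* on newer Linux kernels this may return
                                   immediately if nothing else is runnable,
                                   so the loop still shows ~100% CPU */
            /* the proposed alternative: usleep(1); -- sleeps for at
               least one tick, cutting CPU usage at the cost of latency */
        }
    }

The trade-off is that usleep(1) would lower the reported CPU usage but add wake-up latency to every message, which is presumably why the idea was shot down.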