Thanks for taking the time to answer this. I actually reached that
conclusion after trying a simple MPI::Barrier() with both Open MPI and
LAM/MPI, where both showed the same active-wait behaviour.
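For reference, the test I mean was essentially this minimal program
(just a sketch: rank 0 sleeps before joining the barrier, so the
active wait of the other ranks shows up as 100% CPU in top):

  // barrier_spin.cc: every rank except 0 sits in the barrier while
  // rank 0 sleeps; meanwhile each waiting rank burns a full CPU.
  #include <mpi.h>
  #include <unistd.h>

  int main(int argc, char* argv[])
  {
      MPI::Init(argc, argv);
      if (MPI::COMM_WORLD.Get_rank() == 0)
          sleep(30);             // keep the other ranks waiting a while
      MPI::COMM_WORLD.Barrier(); // non-zero ranks poll here the whole time
      MPI::Finalize();
      return 0;
  }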
What I'm trying to achieve is to have some kind of calculation
server, where clients connect through an MPI::Intercomm to the server
process with rank 0 and transfer data for it to compute on, but it
seems wasteful to have the server group of processes running at 100%
CPU while waiting for clients.
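Something along these lines is what I have in mind on the server side.
This is only a sketch of a possible workaround, polling manually with
Iprobe and sleeping between polls to trade a little latency for an
idle CPU (port handling kept minimal):

  #include <mpi.h>
  #include <unistd.h>

  int main(int argc, char* argv[])
  {
      MPI::Init(argc, argv);

      // Publish a port and wait for the client group to connect.
      // (Accept itself still spins, but that happens only once.)
      char port[MPI::MAX_PORT_NAME];
      MPI::Open_port(MPI::INFO_NULL, port);
      MPI::Intercomm clients =
          MPI::COMM_WORLD.Accept(port, MPI::INFO_NULL, 0);

      // Only rank 0 handles client requests in this sketch.
      if (MPI::COMM_WORLD.Get_rank() == 0) {
          MPI::Status status;
          for (;;) {
              // Poll for a request from any client instead of
              // blocking in Recv, and yield the CPU between polls.
              if (clients.Iprobe(MPI::ANY_SOURCE, MPI::ANY_TAG, status)) {
                  double buf[1024];
                  clients.Recv(buf, 1024, MPI::DOUBLE,
                               status.Get_source(), status.Get_tag());
                  // ... compute (possibly fanning the work out to the
                  // other server ranks) and Send the result back ...
              } else {
                  usleep(1000);  // ~1 ms latency hit, near-0% CPU
              }
          }
      }
      MPI::Finalize();
      return 0;
  }

It feels like a hack, though, and it does nothing for collectives such
as Barrier.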
It would be nice to be able to specify the waiting behaviour in this
case; or would you suggest another approach?
On Fri, Apr 27, 2007 at 07:49:04PM -0400, Jeff Squyres wrote:
| This is actually expected behavior. We assume that MPI processes
| should exhibit the lowest possible latency, and therefore use active
| polling for most message passing.