This page is part of a frozen web archive of this mailing list. You can still navigate around this archive, but no new mails have been added to it since July 2016.
OMPI does use those methods where it can, but they can't be used for something like shared memory. So if you want the performance benefit of shared memory, we have to poll.
On Dec 13, 2010, at 9:00 AM, Hicham Mouline wrote:
> I don't understand 1 thing though and would appreciate your comments.
> In various interfaces, such as network sockets or threads waiting for data, there are solutions that avoid continuously checking the state of the socket or of some queue: the waiter instead gets _interrupted_ when data arrives, e.g. via condition variables for threads.
> I am not very clear on the details, but it seems that in these contexts continuous polling is avoided, so actual CPU usage is usually nowhere near 100%.
> Why can't something similar be implemented with broadcast for e.g.?
> -----Original Message-----
> From: "Jeff Squyres" [jsquyres_at_[hidden]]
> Date: 13/12/2010 03:55 PM
> To: "Open MPI Users"
> Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu
> I think there *was* a decision, and it effectively changed how sched_yield() operates; it may not do what we expect any more.
> See this thread (the discussion of Linux/sched_yield() comes in the later messages):
> I believe there are similar threads in the MPICH mailing list archives; that's why Dave posted about it on the OMPI list.
> We briefly discussed replacing OMPI's sched_yield() with a usleep(1), but it was shot down.