The changes Jeff mentioned are not in the 1.3 branch; I'm not sure whether they will come over there or not.
I'm a little concerned that someone in this thread is reporting the process affinity binding changing - that shouldn't be happening, and my guess is that something outside of our control is changing it.
One other thing to consider that has been an issue around here, and that will be an even bigger issue with the change to bind at app start: if your app is threaded, we will bind *all* threads to the same processor, thus potentially hampering performance. We have found that multi-threaded apps often perform better if users do *not* set processor affinity via MPI, but instead embed binding calls inside the individual threads so they can be placed on separate processors.
All depends on the exact nature of the application, of course!
On Jun 3, 2009, at 11:40 AM, Ashley Pittman wrote:

Wasn't there a discussion about this recently on the list? OMPI binds during MPI_Init(), so it's possible for memory to be allocated on the wrong quad; the discussion was about moving the binding to the orte process, as I recall?

Yes. It's been fixed in the OMPI devel trunk. I'm not sure it made it to the v1.3 branch, but it's definitely not in a released version yet.

I *thought* that HPL did all of its allocation after MPI_INIT, but I could be wrong. If so, then you're right: using numactl to bind before the MPI app starts will likely give better results (until we get our fixes in such that we bind pre-main).
Regardless, if something is *changing* the affinity after MPI_INIT, then there's little OMPI can do about that.
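For reference, the numactl approach mentioned above might look something like this (a sketch; the flags are standard numactl(8) options, while the binary name and node number are illustrative):

```shell
# Bind the process - and its memory allocations - to NUMA node 0
# before main() runs.  "./xhpl" is an illustrative binary name;
# under mpirun you would wrap the application binary this way.
numactl --cpunodebind=0 --membind=0 ./xhpl
```

Because the binding happens before the process starts, memory allocated early (even before MPI_Init) lands on the right node.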
From my testing of process affinity, you tend to get much more consistent results with it on and much more unpredictable results with it off; I'd question whether it's working properly if you are seeing an 88-93% range in
users mailing list