(now that we're back from vacation)
Actually, this could be an issue. Is hyperthreading enabled on your machine?
Can you send the text output from running hwloc's "lstopo" command on your compute nodes?
I ask because if hyperthreading is enabled, OMPI might be assigning one process per *hyperthread* (vs. one process per *core*). And that could be disastrous for performance.
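For reference, a quick way to check this yourself on a Linux compute node (a sketch, not from the thread; the mpirun flags shown are the Open MPI 1.4/1.5-era binding options, and `./my_app` is a placeholder):

```shell
# If "Thread(s) per core" is greater than 1, SMT/hyperthreading is enabled.
lscpu | grep 'Thread(s) per core'

# hwloc's text topology dump, as requested above (requires hwloc installed):
#   lstopo -

# With hyperthreading on, force one process per physical core
# (Open MPI 1.4/1.5 syntax; ./my_app is a hypothetical application):
#   mpirun --bind-to-core --bycore -np 8 ./my_app
```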
On Dec 22, 2010, at 2:25 PM, Gilbert Grosdidier wrote:
> Hi David,
> Yes, I set mpi_affinity_alone to 1. Is that right and sufficient, please?
> Thanks for your help, Best, G.
> On 22/12/2010 at 20:18, David Singleton wrote:
>> Is the same level of processes and memory affinity or binding being used?
>> On 12/21/2010 07:45 AM, Gilbert Grosdidier wrote:
>>> Yes, there is definitely only 1 process per core with both MPI implementations.
>>> Thanks, G.
>>> On 20/12/2010 at 20:39, George Bosilca wrote:
>>>> Are your processes placed the same way with the two MPI implementations? Per-node vs. per-core?