Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Running OpenMPI on SGI Altix with 4096 cores : very poor performance
From: Gilbert Grosdidier (Gilbert.Grosdidier_at_[hidden])
Date: 2011-01-06 16:10:58


Hi Jeff,

  Where is the lstopo command located on SuseLinux, please?
And/or hwloc-bind, which seems to be related to it?

  Thanks, G.
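  (A quick way to check, assuming hwloc was installed from the distribution
package and that the package is simply named "hwloc" -- the package name is
an assumption here:

    which lstopo hwloc-bind
    rpm -ql hwloc | grep bin

On most installs lstopo ends up in /usr/bin; if hwloc was built from source
under a prefix, look in <prefix>/bin instead.)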

On 06/01/2011 21:21, Jeff Squyres wrote:
> (now that we're back from vacation)
>
> Actually, this could be an issue. Is hyperthreading enabled on your machine?
>
> Can you send the text output from running hwloc's "lstopo" command on your compute nodes?
>
> I ask because if hyperthreading is enabled, OMPI might be assigning one process per *hyperthread* (vs. one process per *core*). And that could be disastrous for performance.
>
>
>
> On Dec 22, 2010, at 2:25 PM, Gilbert Grosdidier wrote:
>
>> Hi David,
>>
>> Yes, I set mpi_paffinity_alone to 1. Is that right and sufficient, please?
>>
>> Thanks for your help, Best, G.
>>
>>
>>
>> On 22/12/2010 20:18, David Singleton wrote:
>>> Is the same level of process and memory affinity or binding being used?
>>>
>>> On 12/21/2010 07:45 AM, Gilbert Grosdidier wrote:
>>>> Yes, there is definitely only 1 process per core with both MPI implementations.
>>>>
>>>> Thanks, G.
>>>>
>>>>
>>>> On 20/12/2010 20:39, George Bosilca wrote:
>>>>> Are your processes placed the same way with the two MPI implementations? Per-node vs. per-core?
>>>>>
>>>>> george.
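
For checking how ranks are actually placed, a minimal sketch, assuming an
Open MPI 1.4/1.5-era mpirun (exact option names can differ between versions):

    # Show the hardware layout of a compute node (cores vs. hyperthreads)
    lstopo

    # Bind one process per core and print the resulting bindings
    mpirun --report-bindings --bind-to-core --mca mpi_paffinity_alone 1 \
           -np 4 ./a.out

If the --report-bindings output shows two ranks sharing the hyperthreads of a
single core, the placement is per-hyperthread rather than per-core.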