
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Hybrid program
From: Ralph Castain (rhc_at_[hidden])
Date: 2008-11-20 08:51:42


Not in Linux, I'm afraid - you can assign one process to a set of
cores, but Linux doesn't track individual threads.

If you look at OMPI 1.3's man page for mpirun, you'll see some info on
the rank-file mapping. Most of what was done is aimed at the use of
hostfiles where you specify the socket/core(s) to be used for each
rank. You can also provide a socket/core specification on the cmd
line, but I've been looking to see if you can specify something like
"the lowest rank on each node uses cores 1-4, next rank on each node
uses cores 5-8, ...".

I'm not sure the syntax is currently flexible enough for that (and the
people who would know haven't answered my inquiry), but if there is
interest, I imagine it could be expanded to support such specifications.
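
In the meantime, you can get that kind of layout by enumerating each
rank explicitly in a rankfile (a sketch only, assuming the 1.3 rankfile
syntax; node01/node02, the core numbers, and hybrid_app are
placeholders):

   # 2 ranks per node, one socket's worth of cores per rank
   rank 0=node01 slot=0:0-3
   rank 1=node01 slot=1:0-3
   rank 2=node02 slot=0:0-3
   rank 3=node02 slot=1:0-3

   mpirun -np 4 -rf my_rankfile ./hybrid_app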

Ralph

On Nov 20, 2008, at 12:53 AM, Gabriele Fatigati wrote:

> Is there a way to assign one thread to one core? Ideally from code,
> not necessarily via an Open MPI option.
>
> Thanks.
>
> 2008/11/19 Stephen Wornom <stephen.wornom_at_[hidden]>:
>> Gabriele Fatigati wrote:
>>>
>>> Ok,
>>> but in OMPI 1.3, how can I enable it?
>>>
>>
>> This may not be relevant, but I could not get a hybrid MPI+OpenMP
>> code to work correctly.
>> Would my problem be related to Gabriele's, and perhaps fixed in
>> Open MPI 1.3?
>> Stephen
>>>
>>> 2008/11/18 Ralph Castain <rhc_at_[hidden]>:
>>>
>>>>
>>>> I am afraid it is only available in 1.3 - we didn't backport it
>>>> to the 1.2 series.
>>>>
>>>>
>>>> On Nov 18, 2008, at 10:06 AM, Gabriele Fatigati wrote:
>>>>
>>>>
>>>>>
>>>>> Hi,
>>>>> how can I set "slot mapping" as you told me? With TASK_GEOMETRY?
>>>>> Or is it a new Open MPI 1.3 feature?
>>>>>
>>>>> Thanks.
>>>>>
>>>>> 2008/11/18 Ralph Castain <rhc_at_[hidden]>:
>>>>>
>>>>>>
>>>>>> Unfortunately, paffinity doesn't know anything about assigning
>>>>>> threads to cores. This is actually a behavior of Linux, which only
>>>>>> allows paffinity to be set at the process level. So, when you set
>>>>>> paffinity on a process, you bind all threads of that process to the
>>>>>> specified core(s). You cannot specify that a thread be given a
>>>>>> specific core.
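>>>>>>
>>>>>> (As an illustration of that inheritance - a minimal sketch using
>>>>>> glibc's sched_setaffinity, not Open MPI's actual code - the mask is
>>>>>> set once for the process, and threads created afterwards inherit it:)
>>>>>>
>>>>>>   #define _GNU_SOURCE
>>>>>>   #include <sched.h>
>>>>>>   #include <pthread.h>
>>>>>>   #include <stdio.h>
>>>>>>
>>>>>>   static void *worker(void *arg) {
>>>>>>       /* Inherits the process-wide mask set below: may run on
>>>>>>          core 0 or 1, but cannot be pinned to just one of them
>>>>>>          from outside this thread. */
>>>>>>       printf("worker on cpu %d\n", sched_getcpu());
>>>>>>       return NULL;
>>>>>>   }
>>>>>>
>>>>>>   int main(void) {
>>>>>>       cpu_set_t mask;
>>>>>>       CPU_ZERO(&mask);
>>>>>>       CPU_SET(0, &mask);
>>>>>>       CPU_SET(1, &mask);
>>>>>>       /* pid 0 = the calling thread; threads created later
>>>>>>          inherit this mask. */
>>>>>>       sched_setaffinity(0, sizeof(mask), &mask);
>>>>>>
>>>>>>       pthread_t t;
>>>>>>       pthread_create(&t, NULL, worker, NULL);
>>>>>>       pthread_join(t, NULL);
>>>>>>       return 0;
>>>>>>   }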
>>>>>>
>>>>>> In this case, your two threads per process are sharing the same
>>>>>> core and thus contending for it. As you'd expect in that situation,
>>>>>> one thread gets the vast majority of the attention, while the other
>>>>>> thread is mostly idle.
>>>>>>
>>>>>> If you can upgrade to the 1.3 beta release, try using the slot
>>>>>> mapping to assign multiple cores to each process. This will ensure
>>>>>> that the threads for that process have exclusive access to those
>>>>>> cores, but will not bind a particular thread to one core - the
>>>>>> threads can "move around" across the specified set of cores. Your
>>>>>> threads will then be able to run without interfering with each other.
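>>>>>>
>>>>>> (To verify what a rank actually received, util-linux's taskset can
>>>>>> print a process's allowed cores from a shell on the node - 12345
>>>>>> standing in for the rank's pid - with output along the lines of
>>>>>> "pid 12345's current affinity list: 0-3":)
>>>>>>
>>>>>>   taskset -cp 12345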
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>
>>>>>> On Nov 18, 2008, at 9:18 AM, Gabriele Fatigati wrote:
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Dear Open MPI developers,
>>>>>>> I have a strange problem with a mixed MPI+OpenMP program on
>>>>>>> Open MPI 1.2.6. I'm using PJL TASK_GEOMETRY in the LSF scheduler,
>>>>>>> setting 2 MPI processes per compute node and 2 OpenMP threads per
>>>>>>> process. With paffinity and maffinity enabled, I've noticed that on
>>>>>>> every node 2 threads work at 100% while the other 2 do little or
>>>>>>> no work.
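>>>>>>>
>>>>>>> (Roughly, the job setup looks like this - a sketch only, with the
>>>>>>> LSF directives trimmed and hybrid_app as a placeholder; the
>>>>>>> LSB_PJL_TASK_GEOMETRY value groups MPI task ids per node:)
>>>>>>>
>>>>>>>   export LSB_PJL_TASK_GEOMETRY="{(0,1)(2,3)}"  # tasks 0,1 on one node; 2,3 on the other
>>>>>>>   export OMP_NUM_THREADS=2
>>>>>>>   mpirun -np 4 ./hybrid_app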
>>>>>>>
>>>>>>> If I disable paffinity and maffinity, all 4 threads work well,
>>>>>>> with no load imbalance.
>>>>>>> I don't understand this issue: paffinity and maffinity should map
>>>>>>> every thread onto a specific core, improving cache locality, but I
>>>>>>> only get balanced behavior without setting them!
>>>>>>>
>>>>>>> Can I use paffinity and maffinity in a mixed MPI+OpenMP program?
>>>>>>> Or do they work only for MPI processes?
>>>>>>>
>>>>>>> Thanks in advance.
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Ing. Gabriele Fatigati
>>>>>>>
>>>>>>> CINECA Systems & Tecnologies Department
>>>>>>>
>>>>>>> Supercomputing Group
>>>>>>>
>>>>>>> Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
>>>>>>>
>>>>>>> www.cineca.it Tel: +39 051 6171722
>>>>>>>
>>>>>>> g.fatigati_at_[hidden]
>>
>>
>> --
>> stephen.wornom_at_[hidden]
>> 2004 route des lucioles - BP93
>> Sophia Antipolis
>> 06902 CEDEX
>>
>> Tel: 04 92 38 50 54
>> Fax: 04 97 15 53 51