
Subject: Re: [OMPI users] Hybrid program
From: Gabriele Fatigati (g.fatigati_at_[hidden])
Date: 2008-11-20 02:53:49


Is there a way to assign one thread to one core? Ideally from code, not
necessarily with an Open MPI option.
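
For example, on Linux I imagine something like the sketch below, run inside
each MPI process: it uses the GNU pthread_setaffinity_np extension to bind
every OpenMP thread to its own core. This is untested, and the
thread-to-core layout is only a guess (a real hybrid job would also need an
offset based on the MPI rank):

    /* pin_threads.c -- untested sketch; build with: gcc -fopenmp pin_threads.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
    #pragma omp parallel
        {
            cpu_set_t set;
            int tid = omp_get_thread_num();
            int rc;

            /* Bind this OpenMP thread to the core with the same index. */
            CPU_ZERO(&set);
            CPU_SET(tid, &set);
            rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
            if (rc != 0)
                fprintf(stderr, "thread %d: setaffinity failed (%d)\n",
                        tid, rc);
        }
        return 0;
    }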

Thanks.

2008/11/19 Stephen Wornom <stephen.wornom_at_[hidden]>:
> Gabriele Fatigati wrote:
>>
>> OK, but how can I enable it in Open MPI 1.3?
>>
>
> This may not be relevant, but I could not get a hybrid MPI+OpenMP code to
> work correctly.
> Could my problem be related to Gabriele's, and perhaps fixed in Open MPI 1.3?
> Stephen
>>
>> 2008/11/18 Ralph Castain <rhc_at_[hidden]>:
>>
>>>
>>> I am afraid it is only available in 1.3 - we didn't backport it to the
>>> 1.2 series.
>>>
>>>
>>> On Nov 18, 2008, at 10:06 AM, Gabriele Fatigati wrote:
>>>
>>>
>>>>
>>>> Hi,
>>>> how can I set "slot mapping" as you told me? With TASK GEOMETRY? Or is
>>>> it a new Open MPI 1.3 feature?
>>>>
>>>> Thanks.
>>>>
>>>> 2008/11/18 Ralph Castain <rhc_at_[hidden]>:
>>>>
>>>>>
>>>>> Unfortunately, paffinity doesn't know anything about assigning threads
>>>>> to cores. This is actually a behavior of Linux, which only allows
>>>>> paffinity to be set at the process level. So, when you set paffinity on
>>>>> a process, you bind all threads of that process to the specified
>>>>> core(s). You cannot specify that a thread be given a specific core.
>>>>>
>>>>> In this case, your two threads/process are sharing the same core and
>>>>> thus contending for it. As you'd expect in that situation, one thread
>>>>> gets the vast majority of the attention, while the other thread is
>>>>> mostly idle.
>>>>>
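This matches what I see: one busy thread and one mostly idle thread per
process. I suppose it can be verified from inside the program by printing
each OpenMP thread's affinity mask - an untested sketch:

    /* show_mask.c -- untested sketch; build with: gcc -fopenmp show_mask.c */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
    #pragma omp parallel
        {
            cpu_set_t set;
            int c;

            /* pid 0 means the calling thread on Linux */
            sched_getaffinity(0, sizeof(set), &set);

    #pragma omp critical
            {
                printf("thread %d may run on:", omp_get_thread_num());
                for (c = 0; c < CPU_SETSIZE; c++)
                    if (CPU_ISSET(c, &set))
                        printf(" %d", c);
                printf("\n");
            }
        }
        return 0;
    }

With process-level paffinity, both threads of a process should print the
same single core, which would explain the contention.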
>>>>> If you can upgrade to the beta 1.3 release, try using the slot mapping
>>>>> to assign multiple cores to each process. This will ensure that the
>>>>> threads for that process have exclusive access to those cores, but will
>>>>> not bind a particular thread to one core - the threads can "move around"
>>>>> across the specified set of cores. Your threads will then be able to run
>>>>> without interfering with each other.
>>>>>
>>>>> Ralph
>>>>>
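If I understand slot mapping correctly, in 1.3 it could be expressed with a
rankfile, something like the sketch below (hostnames and core numbers are
made up, and the exact syntax should be checked against the 1.3 mpirun man
page):

    # rankfile: give every rank two cores of its own
    rank 0=node01 slot=0,1
    rank 1=node01 slot=2,3
    rank 2=node02 slot=0,1
    rank 3=node02 slot=2,3

    mpirun -np 4 -rf ./rankfile ./hybrid_app
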
>>>>>
>>>>> On Nov 18, 2008, at 9:18 AM, Gabriele Fatigati wrote:
>>>>>
>>>>>
>>>>>>
>>>>>> Dear Open MPI developers,
>>>>>> I have a strange problem with a mixed MPI+OpenMP program on Open MPI
>>>>>> 1.2.6. I'm using PJL TASK GEOMETRY in the LSF scheduler, with 2 MPI
>>>>>> processes per compute node and 2 OpenMP threads per process. With
>>>>>> paffinity and maffinity enabled, I've noticed that on every node 2
>>>>>> threads work at 100% while the other 2 threads do little or no work.
>>>>>>
>>>>>> If I disable paffinity and maffinity, all 4 threads work well, with no
>>>>>> load imbalance.
>>>>>> I don't understand this: paffinity and maffinity should map every
>>>>>> thread to a specific core, improving cache behavior, but I only see
>>>>>> good behavior without them!
>>>>>>
>>>>>> Can I use paffinity and maffinity in a mixed MPI+OpenMP program? Or do
>>>>>> they work only for MPI processes?
>>>>>>
>>>>>> Thanks in advance.
>>>>>>
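For reference, the kind of job setup I mean looks roughly like this (the
geometry string, host layout, and program name are made up):

    # LSF: 2 MPI tasks on each of 2 nodes, 2 OpenMP threads per task
    export LSB_PJL_TASK_GEOMETRY="{(0,1)(2,3)}"
    export OMP_NUM_THREADS=2
    # mpi_paffinity_alone=1 enables processor affinity in Open MPI 1.2
    mpirun -np 4 -mca mpi_paffinity_alone 1 ./hybrid_app
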
>
> --
> stephen.wornom_at_[hidden]
> 2004 route des lucioles - BP93
> Sophia Antipolis
> 06902 CEDEX
>
> Tel: 04 92 38 50 54
> Fax: 04 97 15 53 51

-- 
Ing. Gabriele Fatigati
CINECA Systems & Technologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it    Tel: +39 051 6171722
g.fatigati_at_[hidden]