Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Is there an mca parameter equivalent to -bind-to-core?
From: Eugene Loh (eugene.loh_at_[hidden])
Date: 2011-03-23 17:13:37


Gus Correa wrote:

> Ralph Castain wrote:
>
>> On Mar 21, 2011, at 9:27 PM, Eugene Loh wrote:
>>
>>> Gustavo Correa wrote:
>>>
>>>> Dear OpenMPI Pros
>>>>
>>>> Is there an MCA parameter that would do the same as the mpiexec
>>>> switch '-bind-to-core'?
>>>> I.e., something that I could set up not in the mpiexec command line,
>>>> but for the whole cluster, or for a user, etc.
>>>>
>>>> In the past I used '-mca mpi mpi_paffinity_alone=1'.
>>>
>>
>> Must be a typo here - the correct command is '-mca
>> mpi_paffinity_alone 1'
>>
>>>> But that was before '-bind-to-core' came along.
>>>> However, my recollection of some recent discussions here on the list
>>>> is that mpi_paffinity_alone would not do the same as '-bind-to-core',
>>>> and that the recommendation was to use '-bind-to-core' on the
>>>> mpiexec command line.
>>>
>>
>> Just to be clear: mpi_paffinity_alone=1 still works and will cause
>> the same behavior as bind-to-core.
>>
>>
>>> A little awkward, but how about
>>>
>>> --bycore rmaps_base_schedule_policy core
>>> --bysocket rmaps_base_schedule_policy socket
>>> --bind-to-core orte_process_binding core
>>> --bind-to-socket orte_process_binding socket
>>> --bind-to-none orte_process_binding none
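
[Editor's note: the MCA parameters listed above could be set cluster-wide in the system openmpi-mca-params.conf instead of on the command line. A minimal sketch for core binding, assuming the parameter names from the list above:]

```
# openmpi-mca-params.conf -- sketch; parameter names as listed above
# schedule (map) processes by core, and bind each process to a core
rmaps_base_schedule_policy = core
orte_process_binding = core
```
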
>>>
>>
>
> Thank you Ralph and Eugene
>
> Ralph, forgive me the typo in the previous message, please.
> Equal sign inside the openmpi-mca-params.conf file,
> but no equal sign on the mpiexec command line, right?
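
[Editor's note: right -- on the command line an MCA parameter is passed as two separate arguments after -mca, while in the file it is written as name = value. A sketch of both forms (./a.out is a placeholder program):]

```
# command line (no equal sign; name and value are separate arguments):
mpiexec -mca mpi_paffinity_alone 1 -np 4 ./a.out

# openmpi-mca-params.conf (equal sign):
mpi_paffinity_alone = 1
```
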
>
> I am using OpenMPI 1.4.3
> I inserted the line
> "mpi_paffinity_alone = 1"
> in my openmpi-mca-params.conf file, following Ralph's suggestion
> that it is equivalent to '-bind-to-core'.
>
> However, now when I do "ompi_info -a",
> the output shows the non-default value 1 twice in a row,
> then later it shows the default value 0 again!
> Please see the output enclosed below.
>
> I am confused.
>
> 1) Is this just a glitch in ompi_info,
> or did mpi_paffinity_alone get reverted to zero?
>
> 2) How can I increase the verbosity level to make sure I have processor
> affinity set (i.e. that the processes are bound to cores/processors)?

Just a quick answer on 2). The FAQ
http://www.open-mpi.org/faq/?category=tuning#using-paffinity-v1.4 (or
"man mpirun" or "mpirun --help") mentions --report-bindings.

If this is on a Linux system with numactl, you can also try "mpirun ...
numactl --show".
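
Concretely, a quick check might look like this (a sketch; assumes Open MPI 1.4.x and, for the second form, numactl installed on the compute nodes; ./a.out is a placeholder program):

```
# Have mpirun print which cores each rank is bound to:
mpirun -np 4 --bind-to-core --report-bindings ./a.out

# Or have each rank report its own affinity via numactl:
mpirun -np 4 numactl --show
```
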

> ##########
>
> ompi_info -a
>
> ...
>
> MCA mpi: parameter "mpi_paffinity_alone" (current
> value: "1", data source: file
> [/home/soft/openmpi/1.4.3/gnu-intel/etc/openmpi-mca-params.conf],
> synonym of: opal_paffinity_alone)
> If nonzero, assume that this job is the
> only (set of) process(es) running on each node and bind processes to
> processors, starting with processor ID 0
>
> MCA mpi: parameter "mpi_paffinity_alone" (current
> value: "1", data source: file
> [/home/soft/openmpi/1.4.3/gnu-intel/etc/openmpi-mca-params.conf],
> synonym of: opal_paffinity_alone)
> If nonzero, assume that this job is the
> only (set of) process(es) running on each node and bind processes to
> processors, starting with processor ID 0
>
> ...
>
> [ ... and after 'mpi_leave_pinned_pipeline' ...]
>
> MCA mpi: parameter "mpi_paffinity_alone" (current
> value: "0", data source: default value)
> If nonzero, assume that this job is the
> only (set of) process(es) running on each node and bind processes to
> processors, starting with processor ID 0
>
> ...
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users