
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Binding to Core Warning
From: Saliya Ekanayake (esaliya_at_[hidden])
Date: 2014-02-26 16:03:05


Is it possible to bind to the cores of multiple sockets? Say I have a machine
with 2 sockets, each with 4 cores: if I run 8 threads in 1 process, can I
utilize all 8 cores for the 8 threads?

Thank you for the speedy replies.

Saliya

On Wed, Feb 26, 2014 at 3:21 PM, Ralph Castain <rhc_at_[hidden]> wrote:

>
> On Feb 26, 2014, at 12:17 PM, Saliya Ekanayake <esaliya_at_[hidden]> wrote:
>
> I have a follow-up question on this. In our application we have parallel
> for loops similar to an OMP parallel for. I noticed that in order to gain
> speedup with threads I have to set --bind-to none; otherwise multiple threads
> bind to the same core, giving no increase in performance. For example, I
> get the following (attached) performance figures for a simple 3-point stencil
> computation run with T threads on 1 MPI process on 1 node (Tx1x1).
>
> My understanding is that even when there are multiple procs per node we
> should use --bind-to none in order to get performance with threads. Is this
> correct? Also, what is the disadvantage of not using --bind-to core?
>
>
> Your best performance with threads comes when you bind each process to
> multiple cores. Binding helps performance by ensuring your memory is always
> local, and provides some optimized scheduling benefits. You can bind to
> multiple cores by adding the qualifier "pe=N" to your mapping definition,
> like this:
>
> mpirun --map-by socket:pe=4 ....
>
> The above example will map processes by socket, and bind each process to 4
> cores.
>
> HTH
> Ralph
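Ralph's suggestion above can be checked with Open MPI's --report-bindings option, which prints the core mask each process was bound to. The sketch below assumes the 2-socket, 4-cores-per-socket node described earlier in the thread; `./my_app` and the process count are placeholders.

```shell
# Map one MPI process per socket and bind each process to 4 cores (pe=4).
# --report-bindings makes the launch daemons print the applied core masks
# to stderr, so the binding can be verified before a real run.
# Assumes a 2-socket node with 4 cores per socket; ./my_app is hypothetical.
mpirun --map-by socket:pe=4 --report-bindings -np 2 ./my_app
```

For the single-process, 8-thread case asked about above, something like `mpirun -np 1 --map-by slot:pe=8 ./my_app` may bind that one process across the cores of both sockets, though the exact behavior depends on the Open MPI version and node topology.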
>
>
> Thank you,
> Saliya
>
>
> On Wed, Feb 26, 2014 at 11:01 AM, Saliya Ekanayake <esaliya_at_[hidden]> wrote:
>
>> Thank you Ralph, I'll check this.
>>
>>
>> On Wed, Feb 26, 2014 at 10:04 AM, Ralph Castain <rhc_at_[hidden]> wrote:
>>
>>> It means that OMPI wasn't built against libnuma, and so we can't
>>> ensure that memory is bound locally to the proc binding. Check whether
>>> numactl and numactl-devel are installed, or you can silence the warning
>>> using "-mca hwloc_base_mem_bind_failure_action silent"
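As a full command line, silencing that warning while keeping core binding might look like the sketch below; `./my_app` and the process count are placeholders, and the MCA parameter name is taken from Ralph's note above.

```shell
# Bind each rank to a core, but suppress the memory-binding warning
# that appears when Open MPI was built without libnuma support.
# ./my_app and -np 8 are hypothetical placeholders.
mpirun --bind-to core -mca hwloc_base_mem_bind_failure_action silent -np 8 ./my_app
```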
>>>
>>>
>>> On Feb 25, 2014, at 10:32 PM, Saliya Ekanayake <esaliya_at_[hidden]>
>>> wrote:
>>>
>>> Hi,
>>>
>>> I tried to run an MPI Java program with --bind-to core. I receive the
>>> following warning and wonder how to fix this.
>>>
>>>
>>> WARNING: a request was made to bind a process. While the system
>>> supports binding the process itself, at least one node does NOT
>>> support binding memory to the process location.
>>>
>>> Node: 192.168.0.19
>>>
>>> This is a warning only; your job will continue, though performance may
>>> be degraded.
>>>
>>>
>>> Thank you,
>>> Saliya
>>>
>>> --
>>> Saliya Ekanayake esaliya_at_[hidden]
>>> Cell 812-391-4914 Home 812-961-6383
>>> http://saliya.org
>>> _______________________________________________
>>> users mailing list
>>> users_at_[hidden]
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
> <3pointstencil.png>
>
>
>
>

-- 
Saliya Ekanayake esaliya_at_[hidden]
Cell 812-391-4914 Home 812-961-6383
http://saliya.org