On Feb 26, 2014, at 12:17 PM, Saliya Ekanayake <esaliya@gmail.com> wrote:

I have a follow-up question on this. In our application we have parallel for loops similar to OpenMP's parallel for. I noticed that in order to gain any speedup with threads I have to set --bind-to none; otherwise all the threads get bound to the same core and performance does not improve. For example, I get the attached performance results for a simple 3-point stencil computation run with T threads on 1 MPI process on 1 node (Tx1x1).

My understanding is that even when there are multiple processes per node we should use --bind-to none in order to get good performance with threads. Is this correct? Also, what is the disadvantage of not using --bind-to core?
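
For concreteness, a rough sketch of the kind of launch I am using (the jar, class name, and thread count here are placeholders, not my exact command):

mpirun -np 1 --bind-to none java -cp stencil.jar StencilApp 8

i.e. a single MPI process on the node, with the application itself spawning 8 worker threads for the parallel for loops.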

Your best performance with threads comes when you bind each process to multiple cores. Binding helps performance by ensuring your memory is always local, and provides some optimized scheduling benefits. You can bind to multiple cores by adding the qualifier "pe=N" to your mapping definition, like this:

mpirun --map-by socket:pe=4 ....

The above example will map processes by socket, and bind each process to 4 cores.
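
For example, on a node with two 8-core sockets, launching an application that runs 4 threads per process as

mpirun -np 4 --map-by socket:pe=4 java YourApp

would place two processes on each socket and bind each one to 4 cores, so every thread gets its own core and the memory it touches stays on the local socket. (The socket/core counts and the class name here are just for illustration.)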

HTH
Ralph


Thank you,
Saliya


On Wed, Feb 26, 2014 at 11:01 AM, Saliya Ekanayake <esaliya@gmail.com> wrote:
Thank you Ralph, I'll check this.


On Wed, Feb 26, 2014 at 10:04 AM, Ralph Castain <rhc@open-mpi.org> wrote:
It means that OMPI didn't get built against libnuma, and so we can't ensure that memory is bound local to the process binding. Check whether numactl and numactl-devel are installed, or you can turn off the warning using "-mca hwloc_base_mem_bind_failure_action silent"
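
For example, on an RPM-based system (assuming yum is available; adjust for your package manager) you could check and install the packages with:

rpm -q numactl numactl-devel
sudo yum install numactl numactl-devel

and then rebuild OMPI so it picks up libnuma. Or, to simply suppress the warning, add the MCA parameter to your command line, e.g.:

mpirun --bind-to core -mca hwloc_base_mem_bind_failure_action silent java YourApp

(where "java YourApp" stands in for your actual command line).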


On Feb 25, 2014, at 10:32 PM, Saliya Ekanayake <esaliya@gmail.com> wrote:

Hi,

I tried to run an MPI Java program with --bind-to core. I get the following warning and wonder how to fix it.


WARNING: a request was made to bind a process. While the system
supports binding the process itself, at least one node does NOT
support binding memory to the process location.

  Node:  192.168.0.19

This is a warning only; your job will continue, though performance may
be degraded.
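
For reference, the command was roughly of this form (the hostfile, jar, and class names are placeholders):

mpirun -np 8 --hostfile nodes --bind-to core java -cp myapp.jar MyMpiProgram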


Thank you,
Saliya
--
Saliya Ekanayake esaliya@gmail.com 
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
<3pointstencil.png>