Gus Correa wrote:
> Hi Matthew
> 5) Are you setting processor affinity on mpiexec?
> mpiexec -mca mpi_paffinity_alone 1 -np ... bla, bla ...
Good point. That option optimizes processor affinity on the assumption
that no other jobs are running. If you ran two MPI jobs with this option,
they would both attempt to use the same logical processors, rather than
spreading the work across the machine.
I have doubts about whether mpi_paffinity can be relied upon with
HyperThreading enabled; it would only work well if it knew to avoid
placing multiple processes on the same physical core.
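One way to see where a process actually landed is to read its allowed-CPU
list on Linux. A minimal sketch, inspecting the current shell (substitute
an MPI rank's pid for "self" to check a running job):

```shell
# Linux-specific sketch: print the logical CPUs a process may run on,
# from /proc/<pid>/status. Here we inspect the shell itself.
mask=$(awk '/^Cpus_allowed_list/ {print $2}' /proc/self/status)
echo "allowed CPUs: $mask"
```

If affinity was set, this shows a narrow list (e.g. a single CPU) instead
of the full range.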
If you don't find an option within OpenMPI to specify which logical
processors your jobs should use, you could do it with
mpiexec -np 4 taskset ..., taking care to use a different core for each
process (and different cores between jobs running together). You would
have to check on your machine whether the appropriate taskset options are
something like -c 0,2,4,6 for separate cores on one package and
-c 8,10,12,14 for the other, or some other scheme. /proc/cpuinfo gives
valuable clues; /usr/sbin/irqbalance -debug (or wherever it lives on your
system) gives even more.
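The /proc/cpuinfo check above can be sketched like this: pick one logical
CPU per (physical id, core id) pair, so each process gets a core to
itself. The here-doc stands in for a small HyperThreaded machine (2 cores,
4 logicals); on a real system you would read /proc/cpuinfo instead, and
the exact field names can vary by kernel and CPU:

```shell
# Sketch: build a "one logical CPU per physical core" list from cpuinfo.
# Replace the here-doc with: < /proc/cpuinfo  on a real machine.
cores=$(awk -F': *' '
  /^processor/   {cpu=$2}                 # logical CPU number
  /^physical id/ {pkg=$2}                 # which package (socket)
  /^core id/     {key=pkg ":" $2          # first logical seen per core wins
                  if (!(key in seen)) {seen[key]=1; printf "%s%s", sep, cpu; sep=","}}
' <<'EOF'
processor	: 0
physical id	: 0
core id	: 0
processor	: 1
physical id	: 0
core id	: 1
processor	: 2
physical id	: 0
core id	: 0
processor	: 3
physical id	: 0
core id	: 1
EOF
)
echo "$cores"   # prints 0,1 for this sample layout
```

You could then hand that list to taskset, e.g.
mpiexec -np 2 taskset -c "$cores" ./your_app, so the scheduler keeps the
ranks off each other's physical cores.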
Without setting affinity, you could also run into problems when you run
out of individual cores: some pairs of processes end up sharing a single
core (running quite slowly) while others run at full speed on cores of
their own.