
Open MPI User's Mailing List Archives


Subject: [OMPI users] Hybrid OpenMPI / OpenMP programming
From: Auclair Francis (francis.auclair_at_[hidden])
Date: 2012-02-29 05:08:09

Dear Open-MPI users,

Our code currently runs Open MPI (1.5.4) with SLURM on a NUMA
machine (2 sockets per node and 4 cores per socket), with basically two
levels of MPI implementation:
- at the lower level, n "Master" MPI processes (one per socket) run
simultaneously by classically dividing the physical domain into n
subdomains;
- at the higher level, 4n MPI processes are spawned to run a sparse
Poisson solver.
At each time step, the code thus goes back and forth between these
two levels using two MPI communicators. This also
means that, during about half of the computation time, 3n cores are at
best sleeping (if not 'waiting' at a barrier) whenever the code is not
inside the "Solver" routines. We consequently decided to add OpenMP to
our code for the phases when the solver is not running (we declare one
single "parallel" region and use the omp "master" directive for the
parts where the OpenMP threads are not active). We however face several
difficulties:

a) It seems that both the 3n MPI processes and the OpenMP threads
'consume processor cycles while waiting'. We consequently tried: mpirun
-mpi_yield_when_idle 1 …, export OMP_WAIT_POLICY=passive, or export
KMP_BLOCKTIME=0 ... The latter finally leads to an interesting reduction
of the computing time, but worsens the second problem we have to face
(see b).

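Just to restate the combination being tested in one place (a sketch: the executable name and process count are placeholders; `mpi_yield_when_idle` is spelled here as an MCA parameter, which is the form Open MPI documents):

```shell
# Make both idle MPI processes and idle OpenMP threads yield/sleep
# instead of busy-waiting (values as discussed above).
export OMP_WAIT_POLICY=passive   # OpenMP standard: prefer sleeping waiters
export KMP_BLOCKTIME=0           # Intel runtime: sleep immediately when idle
mpirun --mca mpi_yield_when_idle 1 -np 8 ./our_code   # 4n = 8, hypothetical
```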
b) We managed to get a "correct" (?) placement of our MPI processes
on our sockets by using: mpirun -bind-to-socket -bysocket -np 4n …
However, while the OpenMP threads initially seem to scatter across each
socket (one thread per core), they slowly migrate to the same core as
their 'Master MPI process', or gather on one or two cores per socket…
We played around with the environment variable KMP_AFFINITY, but the
best we could obtain was a pinning of the OpenMP threads to their own
cores... which at the same time disorganized the placement of the 4n
Level-2 MPI processes. In addition, neither specifying a rankfile nor
the mpirun option -x IPATH_NO_CPUAFFINITY=1 seems to change the
situation significantly. This behavior looks rather inefficient, but so
far we have not managed to prevent the 4 threads from migrating onto at
most a couple of cores!
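In case it is useful to others reading this, a hypothetical rankfile for the "one master rank per socket, OpenMP threads on the remaining cores" layout might look like the following. Hostnames, rank count, and socket:core numbers are placeholders, and the `socket:core-range` slot syntax is the one shown in the mpirun man page:

```shell
# Hypothetical rankfile: one MPI rank bound per socket, leaving that
# socket's cores available for the rank's OpenMP threads.
cat > rankfile <<'EOF'
rank 0=node01 slot=0:0-3
rank 1=node01 slot=1:0-3
EOF

# -report-bindings prints where each process actually landed,
# which helps diagnose the thread-migration problem described above.
mpirun -np 2 -rf rankfile -report-bindings ./our_code
```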

Is there something wrong with our "hybrid" implementation?
Do you have any advice?
Thanks for your help,