Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> Hi there,
> I have a 2-cpu system (linux/x86-64), running openmpi-1.1. I do not
> specify a hostfile.
> Lately I'm having performance problems when running my mpi-app this way:
> mpiexec -n 2 ./mpi-app config.ini
> Both mpi-app processes are running on cpu0, leaving cpu1 idle.
> After reading the mpirun manpage, it seems that openmpi binds tasks to
> cpus in a round-robin way, meaning that this should not happen.
> But given my problem, I assume that it's not detecting that this is a
> 2-way smp system (assuming a UP system instead) and is binding both
> tasks to cpu0.
> Is this correct?
By default I do not think Open MPI does any process affinity (although I
could be wrong). See the Open MPI FAQ for information on process affinity.
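If you do want affinity, the 1.x series exposes the mpi_paffinity_alone
MCA parameter for it. A minimal sketch (check "ompi_info --param mpi all"
on your install to confirm the parameter is available in your version):

    # Ask Open MPI to bind each MPI process to its own processor;
    # mpi_paffinity_alone is the 1.x-era knob for this.
    mpiexec --mca mpi_paffinity_alone 1 -n 2 ./mpi-app config.ini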
> The openmpi-default-hostfile says I should not specify localhost in
> there, and to let the job dispatcher/rca "detect" the single-node setup.
> Where should I define/configure system wide, that this is a
> single-node, 2-slot system?
> I would like to avoid obliging the system's users to pass a
> hostfile to mpirun/mpiexec. I simply want mpiexec -n N ./mpi-task to
> do the proper job of _really_ spreading the processes evenly across
> all the system's CPUs.
> Best regards, waiting for your answer.
You could put localhost and specify the number of slots in the default
hostfile, or just pass a hostfile containing localhost to mpirun.
By default, Open MPI will run on localhost and assume 1 slot if it does
not detect a resource manager and is not given a hostfile.
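For example, a sketch of what that hostfile entry could look like (the
default hostfile typically lives under $prefix/etc, and "myhosts" here
is just a hypothetical filename):

    # In openmpi-default-hostfile (or any hostfile):
    # tell Open MPI this node has 2 slots
    localhost slots=2

    # With the default hostfile edited, plain mpiexec sees both slots:
    mpiexec -n 2 ./mpi-app config.ini

    # Or pass a hostfile explicitly for a single run:
    mpiexec --hostfile myhosts -n 2 ./mpi-app config.ini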
> P.S.: should I upgrade to the latest Open MPI to have my problem
> "automagically" solved?
I would definitely update to a newer version. The 1.1 series has many
known bugs that have been fixed in later releases.
Hope this helps,