Open MPI User's Mailing List Archives

From: Jean-Christophe Hugly (jice_at_[hidden])
Date: 2006-02-02 19:41:51


On Thu, 2006-02-02 at 15:19 -0700, Galen M. Shipman wrote:
> By using slots=4 you are telling Open MPI to put the first 4
> processes on the "bench1" host.
> Open MPI will therefore use shared memory to communicate between the
> processes, not InfiniBand.

Well, actually no, unless I'm mistaken. In my
mca-params.conf I have:

rmaps_base_schedule_policy = node

That would spread processes across the nodes, right?
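
In case it helps anyone reading this in the archives later: the same
policy can be set for a single run from the command line instead of
mca-params.conf, and restricting the BTL list is a quick way to confirm
which transport is actually used between the nodes. A sketch, assuming
a build where the scheduling param is named as above and the InfiniBand
BTL is openib (the machinefile and binary name are the ones from this
thread):

   # spread ranks across nodes for this one run (overrides mca-params.conf)
   mpirun -mca rmaps_base_schedule_policy node -np 2 \
       -machinefile /root/machines ./PMB-MPI1

   # restrict the BTLs so the run fails outright if InfiniBand is unusable,
   # instead of silently falling back to another transport
   mpirun -mca btl openib,self -mca rmaps_base_schedule_policy node -np 2 \
       -machinefile /root/machines ./PMB-MPI1

Running "ompi_info --param rmaps base" should show what the scheduling
param is called on a given version.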

> You might try:
>
>
> mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np
> 2 -d xterm -e gdb PMB-MPI1

Thanks for the tip. The last time I tried this, it took quite a few
attempts to get it right. As I did not remember the magic trick, I was
somewhat reluctant to go in that direction. Since you just handed me
the recipe on a silver platter, I'll do it.
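
For my own notes, and for whoever greps the archives next, here is that
recipe spelled out piece by piece. Everything below is taken from your
command; the only extra step is typing "run" at each gdb prompt:

   # -prefix /opt/ompi       Open MPI install prefix on every node
   # -wdir `pwd`             start each rank in the current directory
   # -machinefile ...        hosts to map the ranks onto
   # -np 2                   two ranks
   # -d                      debug output from mpirun's run-time layer
   # xterm -e gdb PMB-MPI1   per rank, an xterm running gdb on the benchmark
   mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines \
       -np 2 -d xterm -e gdb PMB-MPI1
   # then type "run" at each gdb prompt to start that rank under the debugger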

J-C