What version of OMPI are you running? We stopped supporting bproc after the 1.2 series, though you could always launch via ssh.

On Dec 12, 2012, at 10:25 PM, Ng Shi Wei <nsw_1216@hotmail.com> wrote:

Dear all,

I am new to Linux and clustering. I am setting up a Beowulf cluster using several PCs according to this guide: http://www.tldp.org/HOWTO/html_single/Beowulf-HOWTO/.

I have set up and configured everything accordingly except for the NFS part, because I do not require it for my application. I have set up ssh so the nodes can log in to each other without a password. I started with 2 nodes first. I can compile and run on my head node using Open MPI, but when I try to run my MPI application across nodes, nothing is displayed. It just seems to hang there.

Head node: master
Client: slave4

The command I used to run across nodes is as below:
mpirun -np 4 --host slave4 output
Since I am not using NFS, I installed Open MPI on every node in the same location.
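[A common way to narrow down a silent hang like this is to test each layer separately, since mpirun launches remote processes over ssh. A minimal diagnostic sketch, assuming the hostname slave4 from above; the binary path is hypothetical, substitute the real location of your compiled program:]

```shell
# 1. Verify passwordless ssh works non-interactively (mpirun requires this;
#    BatchMode=yes fails instead of prompting if keys are not set up):
ssh -o BatchMode=yes slave4 hostname

# 2. Verify Open MPI is on the remote node's non-interactive PATH
#    (a frequent cause of hangs when there is no shared filesystem):
ssh slave4 which mpirun

# 3. Launch a trivial system command remotely before trying your own program:
mpirun -np 4 --host slave4 hostname

# 4. Run your program by an explicit path, since without NFS the binary
#    must already exist at the same path on slave4 (path is an example):
mpirun -np 4 --host slave4 $HOME/output
```

If step 3 prints hostnames but step 4 hangs, the problem is usually the binary's path or environment on the remote node rather than Open MPI itself.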

I am wondering whether I have missed any configuration.

Hope someone can help me out of this problem.

Thanks in advance.

Best Regards,
Shi Wei