Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] few Problems
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-04-24 16:13:17

On Apr 23, 2009, at 3:59 PM, Luis Vitorio Cargnini wrote:

> I'm using NFS, so my home dir is the same on all nodes. The problem
> is that when the key is generated, it is generated for a specific
> machine: the end of the key is user_at_host, and the system consults
> id_dsa on each machine.

That's ok. I have a similar setup: svbu-mpi is my cluster "head node"
and that's where I generated my DSA key. So my id_dsa.pub file looks
like this:

[13:05] svbu-mpi:~/hg % cat ~/.ssh/id_dsa.pub
ssh-dss [...]+IpBwD318AjraZtJXlIb03tkX7l2gZNncwOmzFbwqGwypD3YtHAY3j1 jsquyres_at_svbu-mpi
[13:05] svbu-mpi:~/hg %

And that same $HOME/.ssh/id_dsa.pub (and corresponding $HOME/.ssh/
id_dsa) file is available on all my nodes via NFS. The email address
at the end is not really part of the key; it's just there for human
reference for you to remember where it came from. It doesn't affect
the authentication at all.
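
For reference, generating such a key pair on the head node looks
something like this (just a sketch; the empty passphrase via -N is
what makes the logins non-interactive):

ssh-keygen -t dsa -N "" -f $HOME/.ssh/id_dsa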

> So, to fix the problem, since my applications are launched from node
> srv0, I just created the keys on node 0, and it started to work for
> connecting to the other nodes. The problem is the reverse path: I
> can't access srv0 from srv1, for example.

Why not? If you copy your id_dsa.pub file to authorized_keys, it
should Just Work (assuming the permissions are all set correctly;
see the example commands after this list):

- $HOME/.ssh owned by you, 0700
- $HOME/.ssh/authorized_keys owned by you, 0600
- $HOME/.ssh/id_dsa.pub owned by you, 0644
- $HOME/.ssh/id_dsa owned by you, 0600
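
Concretely, something like this on the NFS-shared $HOME (a sketch,
assuming the key pair from above already exists):

cd $HOME/.ssh
cat id_dsa.pub >> authorized_keys
chmod 0700 $HOME/.ssh
chmod 0600 authorized_keys id_dsa
chmod 0644 id_dsa.pub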

The SSH setup HOWTOs and recipes sent in this thread (I assume) must
talk about such things...?

> The point is that, working from node0, the ssh connections work.
> Now the execution starts but does not stop; it keeps running ad
> infinitum. Any ideas?
> mpirun -d -v -hostfile chosts -np 35 ~/mpi/hello
> [cluster-srv0:29466] procdir: /tmp/openmpi-sessions-lvcargnini_at_cluster-srv0_0/44411/0/0

Are you able to run non-MPI apps through mpirun? For example:

mpirun -d -v -hostfile chosts hostname | sort

If that works, then did you compile "hello" correctly (e.g., with
mpicc)? I assume this is a simple "hello world" kind of MPI program
-- calls MPI_INIT, maybe MPI_COMM_RANK and MPI_COMM_SIZE, and then
MPI_FINALIZE?
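
For reference, a minimal program of that shape (a sketch; build it
with "mpicc hello.c -o hello"):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    /* Initialize MPI, then query this process's rank and the world size */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello, world: I am %d of %d\n", rank, size);

    /* Shut down MPI; a program that skips this can appear to hang */
    MPI_Finalize();
    return 0;
}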

Do you have TCP firewalling disabled on all of your cluster nodes?
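
(On Linux nodes, "iptables -L -n" run as root is one way to see
whether any filtering rules are active; for a quick test, temporarily
stopping the firewall -- e.g., "/etc/init.d/iptables stop" on RHEL-
style systems -- will tell you whether that's the culprit.)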

Jeff Squyres
Cisco Systems