On Apr 3, 2006, at 3:02 PM, Brian Barrett wrote:
> On Apr 3, 2006, at 2:50 PM, Rolf Vandevaart wrote:
>> From what I have read from the Open MPI documentation, it seems
>> that the recommendation is to install Open MPI on an NFS server
>> that is accessible to all the nodes in the cell.
>> Are there any cases where it is recommended to install Open MPI
>> locally on all the nodes in the cell instead? Maybe in the case of
>> clusters if one is concerned about NFS traffic?
> Sure, installing on each node individually has its advantages -
> namely drastically reducing NFS traffic. I think any suggestion of
> installing in NFS was mainly because 1) it's easier and 2) it's less
> likely to be messed up because of version mismatches. But for those
> that are careful to keep their nodes in sync, there's no reason not
> to install Open MPI on local disk.
Configuring an rsync script in cron is a good way to keep machines in
a cluster in sync. One rsync invocation per directory works nicely.
Update the master, and at the next cron run (or manually) everything
in the chosen directory(ies) is up to date. It's best if the script
lives on the client nodes rather than on the master, in case a client
is powered down or the nodes are different architectures. It has
simplified my life greatly.
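A minimal sketch of such a pull-style script, assuming a master host
named "master" and illustrative directory paths (adjust both to your
site); it echoes the rsync commands rather than running them, so you
can check the command lines before putting it in cron:

```shell
#!/bin/sh
# Pull-style sync run from cron on each client node, so a powered-down
# client simply misses one cycle instead of breaking a push from the
# master. Hostname and directory list below are assumptions.
MASTER=master
DIRS="/opt/openmpi /opt/tools"

for dir in $DIRS; do
    # -a preserves permissions and timestamps; --delete removes files
    # that are gone on the master; the trailing slash on the source
    # syncs the directory's contents rather than the directory itself.
    # Drop the leading 'echo' once the paths are verified.
    echo rsync -a --delete "${MASTER}:${dir}/" "${dir}/"
done
```

A crontab entry on each client (here, nightly at 03:00, path
illustrative) would then be something like
`0 3 * * * /usr/local/sbin/sync-from-master.sh`.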