I am new to Open MPI and parallel computing, so I hope I won't
bore/offend you with obvious/off-topic questions.
We are running scientific simulations (using Meep from MIT) on small
bi-processor PCs, and to fully use both processors on each machine, we
had to compile an MPI version of the software.
Compiling and running the app (meep-mpi) with mpirun were both fine.
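For reference, our single-machine invocation looks roughly like this (the control file name is just a placeholder):

```shell
# Use both processors of one bi-processor PC;
# 'sim.ctl' is a hypothetical simulation control file.
mpirun -np 2 meep-mpi sim.ctl
```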
Now, we wonder if we can do a bit more by exploiting the unused
computing power that is available on our lab network overnight.
The problem is that even if our network is more than decent, it is
nowhere near what you can find in a cluster. What's more, the various
computers we could use are quite different (processor, RAM, overall
performance).
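To make the question concrete, we imagine describing these heterogeneous machines with an Open MPI hostfile, with slot counts matching each machine's processor count (host names below are made up):

```
# Hypothetical hostfile: one line per machine, slots = number of processors
fastpc.lab.example  slots=4
bipro1.lab.example  slots=2
bipro2.lab.example  slots=2
```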
Taking this into account, do you think we can use Open MPI over such a
network:
a) for one long simulation shared across the different "nodes"?
b) for embarrassingly parallel simulations, that is, for N independent
simulations that we want to "spread" over the network, for example
running one simulation on each available node?
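To illustrate case b), here is a rough sketch of what we have in mind (host names and control file names are invented; the echo makes it a dry run):

```shell
# Dry-run sketch: one independent meep-mpi simulation per node.
# node1..node3 and sim_*.ctl are hypothetical names.
HOSTS="node1 node2 node3"
i=1
for h in $HOSTS; do
    # Print the command we would launch; remove 'echo' (and add '&')
    # to actually start the runs in the background.
    echo "mpirun --host $h -np 1 meep-mpi sim_$i.ctl"
    i=$((i + 1))
done
```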
What kind of gain/limitations can we expect for both cases?
If Open MPI is not the way forward, do you have an alternative to suggest?
Thanks in advance for your help,
LAAS - CNRS
7 avenue du Colonel Roche
31077 TOULOUSE Cedex 4
Tel: +33 5 61 33 64 59
email: antoine.monmayrant_at_[hidden]
permanent email: antoine.monmayrant_at_[hidden]