You can do it; whether it makes sense depends on your application. Load
imbalance in regular MPI applications kills performance, so if your
cluster is very heterogeneous you might prefer a programming paradigm
that handles this by nature (say, RPC). However, if you already have an
application written in MPI, you can try it anyway: it might not be
"efficient", but it should still be far faster than a sequential run.
For embarrassingly parallel jobs you definitely do not need MPI (though
you can use it for that purpose too). Take a look at tools like BOINC or
XtremWeb, which help you deploy a grid of "volunteer" PCs.
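If you just want something quick for the independent runs, a plain shell
loop over ssh is already enough to spread N simulations over idle
machines. A minimal sketch (hostnames, paths and input file names are
made up for illustration):

    # launch one independent simulation per idle machine
    for host in pc01 pc02 pc03; do
        ssh "$host" "cd /shared/simulations && nohup meep input_$host.ctl > run_$host.log 2>&1 &" < /dev/null
    done

Tools like BOINC or XtremWeb add what such a loop lacks: they track
which machines are available, reschedule failed jobs, and collect the
results for you.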
On 18 Jan 2008, at 11:54, Antoine Monmayrant wrote:
> Hi everyone,
> I am new to open-mpi and parallel computing so I hope I won't
> bore/offend you with obvious/off-topic questions.
> We are running scientific simulations (using Meep from MIT) on small
> dual-processor PCs, and to fully use both processors on each machine we
> had to compile an MPI version of the software.
> Compiling and running the app (meep-mpi) with mpirun were both fine.
> Now, we wonder if we can do a bit more by exploiting the unused
> computing power that is available on our lab network during nights and
> weekends.
> The problem is that even if our network is more than decent, it is not
> what you can find in a cluster. What's more, the various computers we
> could use are quite different (processor, RAM, overall performance).
> Taking this into account, do you think we can use open-mpi over such a
> network:
> a) for one long simulation, shared across the different "nodes"?
> b) for embarrassingly parallel simulations, that is for N independent
> simulations that we want to "spread" over the network, for example
> running one simulation on each available node?
> What kind of gains/limitations can we expect in each case?
> If open-mpi is not the way forward, do you have an alternative to
> suggest?
> Thanks in advance for your help,
> Antoine Monmayrant
> LAAS - CNRS
> 7 avenue du Colonel Roche
> 31077 TOULOUSE Cedex4
> Tel:+33 5 61 33 64 59
> email : antoine.monmayrant_at_[hidden]
> permanent email : antoine.monmayrant_at_[hidden]