On 11/8/06, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> On 11/8/06, Greg Lindahl <greg.lindahl_at_[hidden]> wrote:
> > On Tue, Nov 07, 2006 at 05:02:54PM +0000, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> > > if your application is on one given node, sharing data is better
> > > than copying data.
> > Unless sharing data repeatedly leads you to false sharing and a loss
> > in performance.
> What does that mean? I did not understand that.
> > > the MPI model assumes you don't have a "shared memory" system..
> > > therefore it is "message passing" oriented, and not designed to
> > > perform optimally on shared memory systems (like SMPs, or numa-CCs).
> > For many programs with both MPI and shared memory implementations, the
> > MPI version runs faster on SMPs and numa-CCs. Why? See the previous
> > paragraph...
I misunderstood what you said.
The reasons I see for that are:
1) The application's design is MPI-oriented, and that design is adapted
to a shared memory implementation afterwards
(using an MPI programming model in a shared memory application).
2) Cases where the problem space is better solved using an MPI-style
programming model.
Shared memory, or multi-threaded, program development requires a
different programming model.
The MPI model can be better suited to some tasks than the
shared memory model is.
> But, for instance, try to benchmark real applications with MPI and
> POSIX threads implementations on the same NUMA-CC or big SMP machine.
> My bet is that the POSIX threads implementation is going to be faster.
> There are always exceptions, like having a very well designed MPI
> application but a terrible POSIX threads one, or a design that's just
> not that adaptable to a POSIX threads programming model (or an MPI
> one).
Miguel Sousa Filipe