
Open MPI User's Mailing List Archives


From: Marcus G. Daniels (mdaniels_at_[hidden])
Date: 2007-03-23 21:33:39

George Bosilca wrote:
> All in all we end up with a multi-hundreds KB library which in most
> of the applications will be only used at 10%.
Seems like it ought to be possible to do some coverage analysis for a
particular application and figure out what parts of the library (and
user code) to make adjacent in memory. Then the 10% could be put in the
same overlay. Seems like the EIB is quite fast and can take some abuse
in terms of swapping.
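
The overlay idea above could be sketched with the standard GNU toolchain: build with coverage instrumentation (-fprofile-arcs -ftest-coverage), run the application, and use gcov's report to see which objects hold the hot ~10%; those can then be grouped into overlay regions with ld's OVERLAY command. A minimal sketch, where the object names (hot_comm.o, cold_misc.o) and addresses are purely hypothetical:

```
/* Hypothetical linker script fragment: place the objects identified */
/* as hot via gcov into one overlay region of SPU local store, with  */
/* rarely-used code sharing the same address range, swapped on call. */
SECTIONS
{
  OVERLAY 0x3000 : AT (0x10000)
  {
    .ovly_hot  { hot_comm.o(.text)  }  /* the ~10% actually exercised */
    .ovly_cold { cold_misc.o(.text) }  /* loaded on demand            */
  }
}
```

Both .ovly_hot and .ovly_cold get the same load-time address (0x3000 here), so only one is resident at a time; given how fast the EIB is, the cost of swapping the cold one in should be tolerable.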
> Moreover, most
> of the Cell users we talked with, are not interested to have MPI
> between the SPU. There is only one thing they're looking for,
> removing the last unused SPU cycle from the pipeline !!! There is no
> room for anything MPI-like at that level.
I imagine that OpenMP might be a good option for the Cell, and it even sounds like there may be a GCC option:

..but even so, there are more existing scientific codes written for MPI than for OpenMP. Even if the thing were a dog initially, and yielded a 2x speedup instead of 10x compared to typical CPUs, it would still be useful for installations with large Cell deployments that could well be risking underutilization or hogging due to poor tools support.

I have not investigated how much of the SPU C library is missing for Open MPI to compile, but that's at least fixable, and an independently useful thing to have for Cell users.