George Bosilca wrote:
The paper you cited, while presenting a particular implementation, doesn't present any new ideas. Compression of message data has been studied for a long time, and [unfortunately] it always comes back to the same result: in the general case, not worth the effort!

Now of course, if one limits oneself to very regular applications (such as the one presented in the paper), where the matrices involved in the computation are well conditioned (such as in the paper), and if you only use MPI_DOUBLE (\cite{same_paper}), and finally if you only expect to run over slow Ethernet (1Gbs) (\cite{same_paper_again})... then yes, one might get some benefit.

Yes, you are probably right that it's not worth the effort in general, and
especially not in HPC environments where you have very fast networks.

But I can think of (rather important) special cases where it could matter:

- non-HPC environments with slow networks: Beowulf clusters and/or
  the Internet + ordinary PCs, where you use existing workstations and
  networks for computations.
- communication/io-bound computations where you transfer
  large redundant datasets between nodes

So it would be nice to be able to turn on compression (for specific
communicators and/or data transfers) when you need it.
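Since no such switch exists in Open MPI, the idea can be approximated at the application level. A minimal sketch (my own illustration, not any MPI API; the class and method names are hypothetical, using Python's zlib for brevity):

```python
import zlib

# Hypothetical per-channel wrapper: compression can be toggled for
# specific transfers, as suggested above. In a real MPI program the
# output of pack() would go through MPI_Send (or comm.Send in mpi4py)
# and unpack() would run on the receiving rank.
class CompressingChannel:
    def __init__(self, compress=True, level=6):
        self.compress = compress  # per-channel on/off switch
        self.level = level        # zlib compression level (1..9)

    def pack(self, payload: bytes) -> bytes:
        # Prefix one flag byte so the receiver knows whether to inflate.
        if self.compress:
            return b"\x01" + zlib.compress(payload, self.level)
        return b"\x00" + payload

    def unpack(self, wire: bytes) -> bytes:
        if wire[:1] == b"\x01":
            return zlib.decompress(wire[1:])
        return wire[1:]
```

The one-byte header keeps the receiver oblivious to the sender's setting, so compression can be enabled selectively without coordinating both sides out of band.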

-- 
Tomas

On Apr 22, 2008, at 9:03 AM, Tomas Ukkonen wrote:

Hello

I read somewhere that Open MPI supports
some kind of data compression, but I couldn't find
any information about it.

Is this true, and how can it be used?

Does anyone have any experiences about using it?

Is it possible to use compression in just some
subset of communications (communicator-specific
compression settings)?

In our MPI application we are transferring large
amounts of sparse/redundant data that compresses
very well. Also my initial tests showed significant
improvements in performance.
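To illustrate why such data is a good candidate (this is my own toy example, not the tests mentioned above):

```python
import zlib

# A sparse/redundant payload: mostly zeros, loosely resembling the
# kind of data described in the message.
sparse = bytes(1_000_000)  # 1 MB of zero bytes
compressed = zlib.compress(sparse, 6)
ratio = len(sparse) / len(compressed)
# Highly redundant data shrinks by orders of magnitude, so on a slow
# link the CPU cost of (de)compression can pay for itself.
```

On data like this the compressed size is a tiny fraction of the original, which is exactly the regime where compression beats a slow network.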

There are also articles that suggest that compression
should be used [1].

[1] J. Ke, M. Burtscher and E. Speight.
Runtime Compression of MPI Messages to Improve the
Performance and Scalability of Parallel Applications.


Thanks in advance,
Tomas

_______________________________________________
users mailing list
users@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
