On 12/20/06, Harakiri <harakiri_23_at_[hidden]> wrote:
> I will study the suggested paper; however,
> I actually read a different paper which suggested
> using fewer messages. I would imagine that for arrays
> of, say, 100 million numbers, the network
> messages become the critical factor.
It depends entirely on your network topology and technology (i.e.
bandwidth and latency). It's very hard to predict generic behaviour
beyond: "more data is worse".
Ethernet offers good bandwidth but relatively poor latency, so a few
big chunks are better than lots of small ones; it also depends on how
the network carries your packets along the way.
The network is a critical factor only if its transfer time is
comparable to or greater than the processing time. Copying 1 MB between
nodes is critical for a nanosecond computation, but not if the
computation takes seconds or longer.