On 10 February 2010 14:19, Jeff Squyres <jsquyres_at_[hidden]> wrote:
> On Feb 10, 2010, at 11:59 AM, Lisandro Dalcin wrote:
>> > If I remember correctly, the HPCC pingpong test synchronizes occasionally by
>> > having one process send a zero-byte broadcast to all other processes.
>> > What's a zero-byte broadcast? Well, some MPIs apparently send no data, but
>> > do have synchronization semantics. (No non-root process can exit before the
>> > root process has entered.) Other MPIs treat the zero-byte broadcasts as
>> > no-ops; there is no synchronization and then timing results from the HPCC
>> > pingpong test are very misleading. So far as I can tell, the MPI standard
>> > doesn't address which behavior is correct.
>> Yep... for p2p communication things are clearer (and behavior more
>> consistent across the MPI implementations out there) regarding
>> zero-length messages... IMHO, collectives should be no-ops only in the
>> sense that no actual reduction is made because there are no elements to
>> operate on. I mean, if Reduce(count=1) implies a sync, Reduce(count=0)
>> should also imply a sync.
> Sorry to disagree again. :-)
> The *only* MPI collective operation that guarantees a synchronization is barrier. The lack of synchronization guarantee for all other collective operations is very explicit in the MPI spec.
> Hence, it is perfectly valid for an MPI implementation to do something like a no-op when no data transfer actually needs to take place.
So you say that an MPI implementation is free to perform a sync in
the case of Bcast(count=1), but not in the case of Bcast(count=0)? I
could agree that such behavior is technically correct with respect to
the MPI standard... But it makes me feel a bit uncomfortable... OK, in
the end, a change in semantics depending on message size is comparable
to the blocking/nonblocking one for MPI_Send(count=10^8) versus
MPI_Send(count=0).
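The two legal behaviors being debated can be illustrated without MPI at
all. The following sketch (plain Python threads, not real MPI; the
function names `bcast_sync`/`bcast_noop` are made up for illustration)
models a zero-byte broadcast under each interpretation: variant A makes
non-root ranks wait until the root has entered the call, variant B
returns immediately.

```python
# Hypothetical illustration of the two behaviors discussed above,
# modeled with threads; NOT actual MPI semantics or API.
import threading
import time

def bcast_sync(root_entered):
    # Variant A: even with count=0, a non-root rank does not exit
    # before the root has entered the call (synchronizing behavior).
    root_entered.wait()

def bcast_noop(root_entered):
    # Variant B: count=0 means no data to move, so return at once.
    pass

def run(variant):
    root_entered = threading.Event()
    order = []
    def nonroot():
        variant(root_entered)
        order.append("nonroot-exit")
    t = threading.Thread(target=nonroot)
    t.start()
    time.sleep(0.05)            # the root is "late" entering the collective
    order.append("root-enter")
    root_entered.set()          # root enters the broadcast
    t.join()
    return order

print(run(bcast_sync))   # ['root-enter', 'nonroot-exit']
print(run(bcast_noop))   # ['nonroot-exit', 'root-enter']
```

A pingpong benchmark that relies on the broadcast for synchronization
gets correct timings only under variant A; under variant B the ranks
drift apart, which is the HPCC issue mentioned at the top of the thread.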
> (except, of course, the fact that Reduce(count=1) isn't defined ;-) ).
You likely meant Reduce(count=0) ... Good catch ;-)
PS: The following question is unrelated to this thread, but my
curiosity+laziness cannot resist... Does Open MPI have some MCA
parameter to add a synchronization at every collective call?
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina