Wow. I am indeed on IB.

So a program that calls MPI_Bcast and then does a bunch of setup work that should run in parallel before re-synchronizing in fact serializes the setup work? I see it's not quite that bad: if I run my little program on 5 nodes, I get rank 0 immediately, ranks 1, 2, and 4 after 5 seconds, and rank 3 after 10, revealing, I guess, the tree distribution.
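
In the meantime, forcing a synchronization right after the broadcast (the MPI_Barrier that is commented out in my test program below) does what I want. A minimal sketch of the pattern, with do_setup_work() just standing in for the real setup:

#include <unistd.h>
#include <mpi.h>

static void do_setup_work(void)
{
    sleep(5);                   /* stand-in for the real per-rank setup */
}

int main(int argc, char *argv[])
{
    int config = 0;

    MPI_Init(&argc, &argv);

    MPI_Bcast(&config, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Without this barrier, a rank partway down the broadcast tree may not
       forward the data until it next re-enters the MPI library, so the
       setup below gets staggered down the tree instead of running in
       parallel. */
    MPI_Barrier(MPI_COMM_WORLD);

    do_setup_work();

    MPI_Finalize();
    return 0;
}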

Ticket 1224 isn't terribly clear: is this patch already in 1.2.6 or 1.2.7, or do I have to download the source, patch, and build it myself?


From: Jeff Squyres <>
Date: 09/17/08 11:55 AM
Reply-To: Open MPI Users <>
To: Open MPI Users <>
Subject: Re: [OMPI users] Odd MPI_Bcast behavior
Are you using IB, perchance?

We have an "early completion" optimization in the 1.2 series that can  
cause this kind of behavior.  For apps that dip into the MPI layer  
frequently, it doesn't matter.  But for those that do not dip into the  
MPI layer frequently, it can cause delays like this.  See ticket 1224
for a few more details.
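
To be clear, by "dip into the MPI layer" I just mean making some MPI call every so often during the long non-MPI stretch so the library can make progress on outstanding communication. A rough sketch (compute_one_chunk() and NUM_CHUNKS are placeholders, not anything in your code):

#include <mpi.h>

#define NUM_CHUNKS 1000

static void compute_one_chunk(int i) { (void)i; /* app-specific work */ }

static void long_setup_phase(void)
{
    int i, flag;

    for (i = 0; i < NUM_CHUNKS; i++) {
        compute_one_chunk(i);

        /* MPI_Iprobe never blocks; calling it periodically gives the
           progress engine a chance to move pending messages (e.g. the
           rest of a broadcast) along. */
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                   &flag, MPI_STATUS_IGNORE);
    }
}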

If you're not using IB, let us know.

On Sep 17, 2008, at 10:34 AM, Gregory D Abram wrote:

> I have a little program which initializes, calls MPI_Bcast, prints a  
> message, waits five seconds, and finalizes. I sure thought that each  
> participating process would print the message immediately, then all  
> would wait and exit - that's what happens with mvapich 1.0.0.  On  
> OpenMPI 1.2.5, though, I get the message immediately from proc 0,  
> then 5 seconds later from proc 1, and then 5 seconds later it  
> exits, as if MPI_Finalize on proc 0 flushed the MPI_Bcast. If I add  
> an MPI_Barrier after the MPI_Bcast, it works as I'd expect. Is this  
> behavior correct? If so, I have a bunch of code to change in  
> order to work correctly on OpenMPI.
> Greg
> Here's the code:
> #include <stdlib.h>
> #include <stdio.h>
> #include <unistd.h>   /* gethostname(), sleep() */
> #include <mpi.h>
> int main(int argc, char *argv[])
> {
>   char hostname[256]; int r, s;
>   MPI_Init(&argc, &argv);
>   gethostname(hostname, sizeof(hostname));
>   MPI_Comm_rank(MPI_COMM_WORLD, &r);
>   MPI_Comm_size(MPI_COMM_WORLD, &s);
>   fprintf(stderr, "%d of %d: %s\n", r, s, hostname);
>   int i = 99999;
>   /* broadcast the int as sizeof(i) raw bytes */
>   MPI_Bcast(&i, sizeof(i), MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
>   // MPI_Barrier(MPI_COMM_WORLD);
>   fprintf(stderr, "%d: got it\n", r);
>   sleep(5);
>   MPI_Finalize();
>   return 0;
> }

Jeff Squyres
Cisco Systems
