
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Odd MPI_Bcast behavior
From: Gregory D Abram (gabra_at_[hidden])
Date: 2008-09-17 12:31:54


Wow. I am indeed on IB.

So a program that calls MPI_Bcast and then does a bunch of setup work that
should run in parallel before re-synchronizing in fact serializes that
setup work? I see it's not quite that bad: if I run my little program on 5
nodes, I get the message from rank 0 immediately, from ranks 1, 2, and 4
after 5 seconds, and from rank 3 after 10, revealing, I guess, the tree
distribution.
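
For reference, here's a minimal sketch of the pattern I mean; the explicit
MPI_Barrier is just the workaround I've been using, not necessarily the
right fix:

#include <mpi.h>
#include <unistd.h>

/* Sketch: broadcast, then a long stretch of work with no MPI calls,
 * then re-synchronize.  Without the barrier, ranks downstream in the
 * broadcast tree may not get the data until their parent rank next
 * enters the MPI library. */
int main(int argc, char *argv[])
{
    int value = 0;
    MPI_Init(&argc, &argv);

    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);  /* workaround: force completion on every rank */

    sleep(5);                     /* stand-in for long setup work with no MPI calls */

    MPI_Finalize();
    return 0;
}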

Ticket 1224 isn't terribly clear: is this patch already in 1.2.6 or 1.2.7,
or do I have to download the source, patch it, and build?

Greg

                                                                           
From: Jeff Squyres <jsquyres_at_cisco.com>
Sent by: users-bounces_at_open-mpi.org
To: Open MPI Users <users_at_[hidden]>
Date: 09/17/08 11:55 AM
Subject: Re: [OMPI users] Odd MPI_Bcast behavior
Reply-To: Open MPI Users <users_at_open-mpi.org>

Are you using IB, perchance?

We have an "early completion" optimization in the 1.2 series that can
cause this kind of behavior. For apps that dip into the MPI layer
frequently, it doesn't matter; but for apps that don't, it can cause
delays like this. See
http://www.open-mpi.org/faq/?category=openfabrics#v1.2-use-early-completion
for a few more details.

If you're not using IB, let us know.
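
Just to illustrate the "dip into the MPI layer" idea, here's a rough,
generic sketch (not the specific workaround described in that FAQ entry):
making an occasional cheap MPI call such as MPI_Iprobe during a long
non-MPI stretch gives the library a chance to progress outstanding traffic.

#include <mpi.h>
#include <unistd.h>

/* Hypothetical stand-in for one slice of the application's setup work
 * (no MPI calls inside). */
static void do_setup_chunk(void)
{
    sleep(1);
}

int main(int argc, char *argv[])
{
    int value = 0, flag, chunk;
    MPI_Init(&argc, &argv);

    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Long setup phase, chunked so we re-enter the MPI library between
     * chunks and let any outstanding broadcast traffic make progress. */
    for (chunk = 0; chunk < 5; chunk++) {
        do_setup_chunk();
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                   &flag, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}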

On Sep 17, 2008, at 10:34 AM, Gregory D Abram wrote:

> I have a little program which initializes, calls MPI_Bcast, prints a
> message, waits five seconds, and finalizes. I sure thought that each
> participating process would print the message immediately, then all
> would wait and exit; that's what happens with mvapich 1.0.0. On
> OpenMPI 1.2.5, though, I get the message immediately from proc 0,
> then 5 seconds later from proc 1, and then 5 seconds later it
> exits, as if MPI_Finalize on proc 0 flushed the MPI_Bcast. If I add
> an MPI_Barrier after the MPI_Bcast, it works as I'd expect. Is this
> behavior correct? If so, I have a bunch of code to change in order
> to work correctly on OpenMPI.
>
> Greg
>
> Here's the code:
>
> #include <stdlib.h>
> #include <stdio.h>
> #include <unistd.h>   /* gethostname(), sleep() */
> #include <mpi.h>
>
> int main(int argc, char *argv[])
> {
>     char hostname[256]; int r, s;
>     MPI_Init(&argc, &argv);
>
>     gethostname(hostname, sizeof(hostname));
>
>     MPI_Comm_rank(MPI_COMM_WORLD, &r);
>     MPI_Comm_size(MPI_COMM_WORLD, &s);
>
>     fprintf(stderr, "%d of %d: %s\n", r, s, hostname);
>
>     int i = 99999;
>     /* broadcast the int from rank 0 as sizeof(int) bytes */
>     MPI_Bcast(&i, sizeof(i), MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
>     // MPI_Barrier(MPI_COMM_WORLD);
>
>     fprintf(stderr, "%d: got it\n", r);
>
>     sleep(5);
>
>     MPI_Finalize();
>     return 0;
> }
>

--
Jeff Squyres
Cisco Systems
_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users



