Subject: Re: [OMPI users] Question about Asynchronous collectives
From: Richard Treumann (treumann_at_[hidden])
Date: 2010-09-23 10:54:59


request_1 and request_2 are just local variable names.

The only thing that determines matching order is the order in which
collective calls are issued on the communicator. At each process, some
collective is issued first and some is issued second. The first collective
issued at each process will try to match the first collective issued at
the other processes. By this rule,
rank 0: MPI_Ibcast; MPI_Ibcast
rank 1: MPI_Ibcast; MPI_Ibcast

is well defined, and

rank 0: MPI_Ibcast; MPI_Ireduce
rank 1: MPI_Ireduce; MPI_Ibcast

is incorrect.
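
Written out with the full MPI-3 MPI_Ibcast signature (nonblocking
collectives were only a proposal at the time of this thread; the buffers,
counts, and root below are invented for illustration), the well-defined
case is, roughly:

#include <mpi.h>

int main(int argc, char **argv)
{
    int a = 0, b = 0;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);

    /* Every rank issues the two broadcasts in the same order on
       MPI_COMM_WORLD, so the first Ibcast on each rank matches the
       first Ibcast on every other rank, and the second matches the
       second.  Issuing an Ibcast first on one rank and an Ireduce
       first on another rank would be erroneous. */
    MPI_Ibcast(&a, 1, MPI_INT, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Ibcast(&b, 1, MPI_INT, 0, MPI_COMM_WORLD, &reqs[1]);

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}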

I do not agree with Jeff on the example below. The proc 1 case where the
MPI_Waits are reversed simply requires the MPI implementation to make
progress on both MPI_Ibcast operations inside the first MPI_Wait. The
second MPI_Wait call will simply find that the first MPI_Ibcast is already
done; it becomes, effectively, a query function.

proc 0:
MPI_Ibcast(MPI_COMM_WORLD, request_1) // first Bcast
MPI_Ibcast(MPI_COMM_WORLD, request_2) // second Bcast
MPI_Wait(&request_1, ...);
MPI_Wait(&request_2, ...);

proc 1:
MPI_Ibcast(MPI_COMM_WORLD, request_2) // first Bcast
MPI_Ibcast(MPI_COMM_WORLD, request_1) // second Bcast
MPI_Wait(&request_1, ...);
MPI_Wait(&request_2, ...);

That may/will deadlock.
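
For concreteness, the proc 1 sequence above could be written out as the
sketch below with the full MPI-3 MPI_Ibcast signature (buffers, counts,
and root are invented for illustration). Waiting on the requests in the
reverse of the issue order is legal: the first MPI_Wait must drive
progress on both broadcasts, which is why this does not deadlock in a
compliant implementation.

#include <mpi.h>

int main(int argc, char **argv)
{
    int a = 0, b = 0;
    MPI_Request request_1, request_2;

    MPI_Init(&argc, &argv);

    MPI_Ibcast(&a, 1, MPI_INT, 0, MPI_COMM_WORLD, &request_2); /* first Bcast  */
    MPI_Ibcast(&b, 1, MPI_INT, 0, MPI_COMM_WORLD, &request_1); /* second Bcast */

    /* The wait on the second-issued broadcast also progresses the
       first; the second wait then finds its operation already
       complete and acts, effectively, as a query. */
    MPI_Wait(&request_1, MPI_STATUS_IGNORE);
    MPI_Wait(&request_2, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}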

Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

From: Jeff Squyres <jsquyres_at_[hidden]>
To: Open MPI Users <users_at_[hidden]>
Date: 09/23/2010 10:13 AM
Subject: Re: [OMPI users] Question about Asynchronous collectives
Sent by: users-bounces_at_[hidden]

On Sep 23, 2010, at 10:00 AM, Gabriele Fatigati wrote:

> To be sure: if I have one process that does:
>
> MPI_Ibcast(MPI_COMM_WORLD, request_1) // first Bcast
> MPI_Ibcast(MPI_COMM_WORLD, request_2) // second Bcast
>
> it means that I can't have another process that does the following:
>
> MPI_Ibcast(MPI_COMM_WORLD, request_2) // first Bcast for another process
> MPI_Ibcast(MPI_COMM_WORLD, request_1) // second Bcast for another process
>
> because the first Bcast of the second process matches the first Bcast of
> the first process, and that's wrong.

If you did a "waitall" on both requests, it would probably work because
MPI would just "figure it out". But if you did something like:

proc 0:
MPI_Ibcast(MPI_COMM_WORLD, request_1) // first Bcast
MPI_Ibcast(MPI_COMM_WORLD, request_2) // second Bcast
MPI_Wait(&request_1, ...);
MPI_Wait(&request_2, ...);

proc 1:
MPI_Ibcast(MPI_COMM_WORLD, request_2) // first Bcast
MPI_Ibcast(MPI_COMM_WORLD, request_1) // second Bcast
MPI_Wait(&request_1, ...);
MPI_Wait(&request_2, ...);

That may/will deadlock.

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/