Thanks for the reply. I've mostly figured out the reason. It seems that when a non-blocking send is posted, MPI doesn't spawn a separate thread that takes care of the sending; the transfer makes progress only when the process calls back into the MPI library. Since the data I sent was large (20,000 elements), there wasn't enough progress made during the usleep_fortran pause to send the entire vector. When I changed the send and receive pair to work with a single scalar, I got the expected behavior.
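Since a large Isend typically progresses only inside MPI calls (no background progress thread), one workaround is to replace the plain sleep with a sleep-plus-MPI_Test loop so the library gets regular chances to move data. A minimal sketch, assuming the receiver is rank 0 in MPI_COMM_WORLD and reusing the poster's own usleep_fortran routine; the program name and tag are made up for illustration:

```fortran
program send_poll
  use mpi
  implicit none
  real*8  :: vec(20000)
  integer :: ierr, request, status(MPI_STATUS_SIZE)
  logical :: done

  vec = 1.0d0
  call mpi_init(ierr)
  ! Post the non-blocking send; little or none of the data is on the wire yet.
  call mpi_isend(vec, 20000, MPI_REAL8, 0, 0, MPI_COMM_WORLD, request, ierr)
  done = .false.
  do while (.not. done)
     call usleep_fortran(0.05d0)                 ! the poster's pause routine
     ! mpi_test both checks for completion and drives message progress
     call mpi_test(request, done, status, ierr)
  end do
  call mpi_finalize(ierr)
end program send_poll
```

Each mpi_test call gives the implementation an opportunity to push more of the 20,000-element message out, which a bare sleep between Isend and Wait does not.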
Eugene, you're absolutely right about Iprobe returning true once the message header has been received. Even when Iprobe returns true, it doesn't mean the message buffer can be read. This is not quite what I wanted, so I think I'll use a non-blocking receive with periodic mpi_test polling instead.
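The Irecv/mpi_test pattern described above can be sketched as follows. This is a minimal, hedged example assuming the same one-sender setup as in the thread; the program name and tag are hypothetical, and usleep_fortran is the poster's own routine:

```fortran
program recv_poll
  use mpi
  implicit none
  real*8  :: vec(20000)
  integer :: ierr, request, status(MPI_STATUS_SIZE)
  logical :: done

  call mpi_init(ierr)
  ! Post the receive up front; the buffer fills as the message arrives.
  call mpi_irecv(vec, 20000, MPI_REAL8, MPI_ANY_SOURCE, MPI_ANY_TAG, &
                 MPI_COMM_WORLD, request, ierr)
  done = .false.
  do while (.not. done)
     ! Unlike Iprobe, mpi_test returning .true. guarantees the whole
     ! message has landed and vec is safe to read.
     call mpi_test(request, done, status, ierr)
     print *, done
     call usleep_fortran(0.05d0)   ! the poster's pause routine
  end do
  call mpi_finalize(ierr)
end program recv_poll
```

This avoids the Iprobe pitfall entirely: there is no separate blocking Recv step that can stall waiting for the tail of a partially arrived message.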
I'll take a stab at this since I don't remember seeing any other replies.
At least in the original code you sent out, you used Isend/sleep/Wait to send messages. So I'm guessing that part of the message is sent, Iprobe detects that a matching message is incoming, and the receiver then calls MPI_Recv. That call can begin to receive the message, but it gets stuck waiting for the remainder of the message to arrive. Offhand, I don't know whether that is how MPI_Iprobe is supposed to behave or not.
MPI_IPROBE(source, tag, comm, flag, status) returns flag = true if there is a message that can be received and that matches the pattern specified by the arguments source, tag, and comm.
I suppose this language leaves open the possibility that the message is not yet 100% available to be read, only that the message header has been received and matches the specified criteria.
David Zhang wrote: I have modified the code so that all the terminal output is done by one executable. I have attached the source files; after compiling, type "make go" and the code will execute.
The previous output was from a supercomputer cluster where the two processes reside on two different nodes. When running the same code on a regular multiprocessor machine (a Mac mini in this case), I got this output:
If I'm sending a message every 2 seconds and polling every 0.05 seconds, I would expect 39 F's and 1 T between each number. When I ran it on the supercomputer this was at least true at the very beginning; however, I don't see this at all when running the code on my Mac mini.
On Sat, Jun 5, 2010 at 2:44 PM, David Zhang <email@example.com> wrote:
I'm using mpi_iprobe as a way to send signals between different MPI executables. I'm using the following test code (Fortran):
real*8 :: vec(20000)=1.0
integer :: ierr,i=0,request(1)
end program send
real*8 :: vec(20000)
integer :: ierr
logical :: key_present
key_present = .false.
print *, key_present
end function key_present
end program recv
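The listing above was mangled by the archive, so only fragments survive. For context, here is a hedged reconstruction of the Iprobe-based receiver consistent with those fragments; the loop structure, source, and tag are assumptions, and usleep_fortran is the poster's own pause routine:

```fortran
program recv
  use mpi
  implicit none
  real*8  :: vec(20000)
  integer :: ierr, status(MPI_STATUS_SIZE)
  logical :: key_present

  call mpi_init(ierr)
  do
     key_present = .false.
     ! Non-blocking probe: sets key_present if a matching message is pending.
     call mpi_iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &
                     key_present, status, ierr)
     print *, key_present
     if (key_present) then
        ! This blocking receive is where the reported hang occurred:
        ! Iprobe can return true before the full message has arrived.
        call mpi_recv(vec, 20000, MPI_REAL8, MPI_ANY_SOURCE, MPI_ANY_TAG, &
                      MPI_COMM_WORLD, status, ierr)
     end if
     call usleep_fortran(0.05d0)
  end do
end program recv
```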
usleep_fortran is a routine I've written to pause the program for the given amount of time (in seconds). As you can see, on the receiving end I'm probing every 0.05 seconds to see whether the message has been received, and each probe prints its result; the send happens once every 2 seconds.
mpirun -np 1 recv : -np 1 send
Naturally I expect the output to be something like:
(forty or so F)
(another forty or so F)
however this is the output I get:
(forty or so F)
(about a two second delay)
It seems to me that after the first set of probes, once the message was received, the non-blocking MPI probe becomes blocking for some strange reason. I'm using mpi_iprobe for the first time, so I'm not sure whether I'm doing something blatantly wrong.
users mailing list