Are you overwhelming the receiver with short, unexpected messages such that MPI keeps mallocing and mallocing and mallocing in an attempt to eagerly receive all the messages? I ask because Open MPI only eagerly sends short messages -- long messages are queued up at the sender and not actually transferred until the receiver starts to receive (aka a "rendezvous protocol").
While that *can* happen, I'd be a little surprised if it did. Indeed, it would probably take a while for that to happen (i.e., the time needed for the receiver to malloc a small amount N times, where N is large enough to exhaust the virtual memory on your machine, plus all the delay of paging out old memory and paging it back in on demand as Open MPI scans for new incoming matches... this could be pretty darn slow). Is that what is happening?
Are you sure that you don't have some other kind of memory error in your application?
FWIW, you can use MPI_SSEND to do a "synchronous" send, which means that it won't complete until the receiver has started to receive the message. This may slow your sender down dramatically, however. If it slows down your sender too much, you may have to implement your own flow control.
On Aug 25, 2011, at 10:58 PM, Rodrigo Oliveira wrote:
> Hi there,
> I am facing some problems in an Open MPI application. Part of the application is composed of a sender and a receiver. The problem is that the sender is so much faster than the receiver that the receiver's memory is completely exhausted, aborting the application.
> I would like to know if there is a flow control scheme implemented in Open MPI, or if this issue has to be handled at the user application's layer. If one exists, how does it work and how can I use it in my application?
> I did some research about this subject, but I did not find a conclusive explanation.
> Thanks a lot.