Re: MPI_Ssend(): this indeed fixes bug3; the process at rank 0 has
reasonable memory usage and execution proceeds normally.
Re scalable: One second. I know well that bug3 is not scalable, and when
to use MPI_Isend. The point is that programmers want to count on the MPI
spec as written, as Richard pointed out. We want to send small messages
quickly and efficiently, without the danger of overloading the
receiver's resources. We can use MPI_Ssend(), but it is slow compared to
MPI_Send(). Since identifying this behavior, we have implemented the
desired flow control in our application.
From: Brightwell, Ronald [mailto:rbbrigh_at_[hidden]]
Sent: Monday, February 04, 2008 4:35 PM
To: Sacerdoti, Federico
Cc: Open MPI Users
Subject: Re: [OMPI users] openmpi credits for eager messages
On Mon, Feb 4, 2008 14:23:13... Sacerdoti, Federico wrote:
> To keep this out of the weeds, I have attached a program called "bug3"
> that illustrates this problem on openmpi 1.2.5 using the openib BTL.
> The bug3 process with rank 0 uses all available memory buffering
> "unexpected" messages from its neighbors.
> Bug3 is a test-case derived from a real, scalable application (desmond
> for molecular dynamics) that several experienced MPI developers have
> worked on. Note the MPI_Send calls of processes N>0 are *blocking*;
> openmpi silently sends them in the background and overwhelms process 0
> due to lack of flow control.
This looks like an N->1 communication pattern to me.
> It may not be hard to change desmond to work around openmpi's small
> message semantics, but a programmer should reasonably be allowed to
> think a blocking send will block if the receiver cannot handle it yet.
It's actually pretty easy -- change MPI_Send() to MPI_Ssend().
It sounds like you may be confused by what the term "blocking" means in
MPI.