I have a general question about the best way to design an Open MPI
application.
A machine (I will call it the "server") should regularly send tasks to a
cluster containing many processors (the "clients"). The tasks are byte
buffers of very variable sizes. The server should send a different buffer
to each client, then wait for each client's answer (a buffer sent back
after some processing) and retrieve the result data.
My first attempt looked like this:
On the server side: send a buffer to each client sequentially using MPI_Send,
then receive the results sequentially.
On each client side: loop, waiting for a buffer with MPI_Recv, processing it,
then sending the result back with MPI_Send.
This is really not efficient, because a lot of time is lost while the server
sends and receives the buffers sequentially.
Its only advantage is a pretty simple loop on the client side:
wait for a buffer (MPI_Recv) -> analyse it -> send the result (MPI_Send)
My wish is to mix MPI_Send/MPI_Recv with other MPI functions such as
MPI_Bcast/MPI_Scatter/MPI_Gather... (as I imagine every MPI application does).
The problem is that I cannot find an easy way for each client to know which
kind of MPI function the server is currently calling: if the server calls
MPI_Bcast, the client has to do the same. Sending a first message each time
to indicate which function the server will call next does not look very
nice, yet I do not see an easier or better way to implement an "adaptive"
scheduler on the client side.
Any tip, advice or help would be appreciated.