Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] calculation progress status
From: MM (finjulhich_at_[hidden])
Date: 2013-10-21 10:58:30


On 21 October 2013 15:19, Andreas Schäfer <gentryx_at_[hidden]> wrote:

> Hi,
>
> the solution depends on the details of your code. Will all clients
> send their progress updates simultaneously? Are you planning for few
> or many nodes?
>
> For a few nodes and non-simultaneous updates you could loop on the root
> while receiving from MPI_ANY_SOURCE. Clients could send out their updates via
> MPI_Isend().
>
> If you're expecting many nodes, this 1-to-n scheme will eventually
> overwhelm the root node. In that case MPI_Gather() or MPI_Reduce()
> will perform better, but those require all nodes to participate.
>
> Things get complicated if you want non-simultaneous updates from many
> nodes...
>
> HTH
> -Andreas

Thanks. Currently I run a prototype with 32 MPI processes or so, but I would
deploy to a larger set of processes later.
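
For the larger runs, if I understand the MPI_Gather alternative correctly, the
batched variant would have every rank (root included) report at fixed points.
A minimal sketch, assuming each rank just reports its iteration count (the call
would sit inside loop 4, every x iterations; all names are placeholders):

#include <mpi.h>
#include <vector>

// Batched progress report: called at the same point by ALL ranks,
// so every rank must reach this call together.
void report_progress_gathered(int done, int rank, int size)
{
    std::vector<int> counts;                  // only meaningful on the root
    if (rank == 0)
        counts.resize(size);
    MPI_Gather(&done, 1, MPI_INT,
               counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        // counts[r] = iterations finished by rank r; hand this to the UI thread
    }
}

That of course assumes all ranks reach the Gather at the same time, which is
exactly where my question below comes from.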

===> root process code:
I) MPI thread
1. list all n-tuples
2. split the list equally across the 32 processes
3. scatter
4. loop to evaluate f locally for my section of the space
5. reduce

II) UI thread

===> compute MPI process node
3. scatter: receive my list of n-tuples
4. loop to evaluate f locally for my section of the space
5. reduce
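
In code terms the skeleton is roughly this; everything is simplified (each
n-tuple is collapsed to a single double and f() stands in for the real
evaluation):

#include <mpi.h>
#include <vector>

double f(double x) { return x * x; }          // placeholder for the real f

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int perRank = 1000;                 // tuples per process (placeholder)
    std::vector<double> all;                  // full list, only built on the root
    if (rank == 0) {
        all.resize(static_cast<std::size_t>(perRank) * size);
        for (std::size_t i = 0; i < all.size(); ++i)   // 1./2. list and split
            all[i] = static_cast<double>(i);
    }

    // 3. scatter my section of the space
    std::vector<double> mine(perRank);
    MPI_Scatter(all.data(), perRank, MPI_DOUBLE,
                mine.data(), perRank, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    // 4. loop to evaluate f locally
    double localSum = 0.0;
    for (int i = 0; i < perRank; ++i)
        localSum += f(mine[i]);

    // 5. reduce the partial results on the root
    double globalSum = 0.0;
    MPI_Reduce(&localSum, &globalSum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}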

The loops in step 4 are not naturally in sync.
Would you suggest modifying the loop to do an MPI_Isend every x iterations
(on the clients) and an MPI_Irecv on the root?
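
Concretely, something like this is what I have in mind; the tag, the report
interval x and the helper names are just placeholders, and f() is the same
stand-in as in the skeleton above:

#include <mpi.h>
#include <vector>

double f(double x);                            // as in the skeleton above

static const int PROGRESS_TAG = 42;            // arbitrary tag for progress messages

// Client side of loop 4: every x iterations send a small non-blocking
// progress update (iterations finished so far) to the root.
double evaluate_with_progress(const std::vector<double>& mine, int x)
{
    double localSum = 0.0;
    int done = 0, progressMsg = 0;
    MPI_Request req = MPI_REQUEST_NULL;
    for (std::size_t i = 0; i < mine.size(); ++i) {
        localSum += f(mine[i]);
        if (++done % x == 0) {
            MPI_Wait(&req, MPI_STATUS_IGNORE); // previous update has left the buffer
            progressMsg = done;                // safe to reuse the send buffer now
            MPI_Isend(&progressMsg, 1, MPI_INT, 0, PROGRESS_TAG,
                      MPI_COMM_WORLD, &req);
        }
    }
    MPI_Wait(&req, MPI_STATUS_IGNORE);         // drain the last update
    return localSum;
}

// Root side of loop 4: do my own share of the work, but keep one MPI_Irecv
// from MPI_ANY_SOURCE pending and MPI_Test it each iteration, handing any
// update that arrived to the UI thread.
double evaluate_and_poll(const std::vector<double>& mine)
{
    double localSum = 0.0;
    int update = 0;
    MPI_Request rreq;
    MPI_Irecv(&update, 1, MPI_INT, MPI_ANY_SOURCE, PROGRESS_TAG,
              MPI_COMM_WORLD, &rreq);
    for (std::size_t i = 0; i < mine.size(); ++i) {
        localSum += f(mine[i]);
        int arrived = 0;
        MPI_Status st;
        MPI_Test(&rreq, &arrived, &st);
        if (arrived) {
            // rank st.MPI_SOURCE has finished 'update' iterations:
            // hand that to the UI thread, then re-post the receive
            MPI_Irecv(&update, 1, MPI_INT, MPI_ANY_SOURCE, PROGRESS_TAG,
                      MPI_COMM_WORLD, &rreq);
        }
    }
    MPI_Cancel(&rreq);                         // stop listening before the reduce
    MPI_Wait(&rreq, MPI_STATUS_IGNORE);
    return localSum;
}

I realise the root cancels its receive before the reduce, so the very last
updates from slower clients may simply be dropped; for a progress display that
seems acceptable to me.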

Thanks MM