Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_Comm_split
From: Bill Rankin (Bill.Rankin_at_[hidden])
Date: 2010-11-30 10:50:11

> The tree is not symmetrical: the valid values for the 10th
> parameter depend on the values selected for the 0th through 9th
> parameters (all the ancestry in the tree). For example, we may have
> many more nodes in the left of the tree than in the right; see the
> attachment (I hope they're allowed).

Which is why you don't have the master hand out all the work at once. Instead, it hands out a small(er) piece of work to each node from a large list, where the length of the list is significantly larger than the number of nodes. As each node finishes processing the bit of work it was given, it sends a message back to the master with its results and asks for more work. You repeat until all the data has been processed.

E.g., say you are searching through all possible combinations of 10 parameters (n0,...,n9). The master would generate all possible combinations of the first 3 parameters (n0,n1,n2) and then send each element of that list to a slave process, which uses it as a basis vector for searching the rest of the space (n3,...,n9). As each slave finishes, it asks the master for another basis vector to work on.

Lather, rinse, repeat until finished.

If you keep the number of basis vectors much higher than the number of slaves (say 100x bigger), the code will load-balance itself: it really doesn't matter in what order the slaves finish processing a single basis, as long as they are all kept busy.

I used this approach many years ago searching for numerical sequences known as Golomb Rulers. Email me off list and I can give you some pointers to references.

Good luck,