Subject: Re: [OMPI users] calling a parallel solver from sequential code
From: Florian Bruckner (e0425375_at_[hidden])
Date: 2013-11-19 05:55:07


On 11/18/2013 05:20 PM, Damien wrote:
> Florian,
>
> There are two ways. You can make your whole app MPI-based, but only
> have the master process do any of the sequential work, while the
> others spin until the parallel part. That's the easiest, but you then
> have MPI everywhere in your app. The other way is to have the MPI
> processes exist totally outside your main sequential process. This
> keeps you isolated from the MPI, but it's a lot more work.
>
> I've done the MPI on the outside with the MUMPS linear solver. You
> need to spin up the MPI process group separately, so your sequential
> process isn't doing any work while they're running. You also need to
> send data to the MPI processes, which I used Boost's Shared-Memory
> library for (if you can't use C++ in your project this won't work for
> you at all). You also have to keep the MPI processes and the main
> process synchronised, and you need your main process to surrender its
> core while the MPI solve is going on, so you end up with a bunch of
> Sleep or sched_yield calls so that everything plays nicely. The whole
> thing takes a *lot* of tweaking to get right. Honestly, it's a total
> pig and I'd recommend against this path (we don't use it anymore in
> our software).
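
For concreteness, a rough sketch of the first approach (the names and
structure here are only illustrative, not taken from Damien's code): every
rank calls MPI_Init, but only rank 0 runs the sequential driver, and the
worker ranks loop waiting for a broadcast command before joining each
parallel solve.

// Rough sketch (illustrative only): the whole app is MPI, rank 0 does the
// sequential work, the other ranks wait for a command before each
// parallel solve.  The stub functions stand in for the real code.
#include <mpi.h>
#include <stdio.h>

enum { CMD_SOLVE = 1, CMD_DONE = 0 };

static int  steps_left = 3;                    // stand-in for the time loop
static int  more_steps(void)      { return steps_left-- > 0; }
static void sequential_step(void) { printf("rank 0: serial step\n"); }
static void parallel_solve(MPI_Comm comm)      // every rank calls this
{
    int rank; MPI_Comm_rank(comm, &rank);
    printf("rank %d: joining parallel solve\n", rank);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int cmd = CMD_SOLVE;
    for (;;) {
        if (rank == 0) {
            if (more_steps()) sequential_step();   // only rank 0 works here
            else              cmd = CMD_DONE;
        }
        // tell every rank whether another parallel solve follows
        MPI_Bcast(&cmd, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (cmd == CMD_DONE) break;
        parallel_solve(MPI_COMM_WORLD);            // all ranks participate
    }

    MPI_Finalize();
    return 0;
}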
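And a rough sketch of the main-process side of the second approach, assuming
a Boost.Interprocess segment called "solver_shm" and a made-up SharedBlock
layout: the problem data and two flags live in shared memory, the MPI solver
group is launched separately (e.g. with mpirun) and opens the same segment,
and the sequential process gives up its core while the solve runs.

// Rough sketch (again only illustrative) of the sequential-process side of
// the second approach.  The segment name "solver_shm" and the SharedBlock
// layout are invented for this example; the external MPI solver group is
// expected to open the same segment and set the "done" flag.
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <atomic>
#include <new>
#include <sched.h>

struct SharedBlock {
    std::atomic<int> go;      // set by the sequential process
    std::atomic<int> done;    // set by the external MPI solver group
    double rhs[1024];         // stand-ins for the real matrix/vector data
    double sol[1024];
};

int main()
{
    using namespace boost::interprocess;

    shared_memory_object shm(open_or_create, "solver_shm", read_write);
    shm.truncate(sizeof(SharedBlock));
    mapped_region region(shm, read_write);
    SharedBlock *blk = new (region.get_address()) SharedBlock();

    // ... fill blk->rhs with the right-hand side ...

    blk->done.store(0);
    blk->go.store(1);                 // signal the solver group to start

    // surrender the core until the external MPI ranks report completion
    while (blk->done.load() == 0)
        sched_yield();

    // ... read the solution back from blk->sol ...

    shared_memory_object::remove("solver_shm");
    return 0;
}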
That is exactly what I wanted to do! We have already tested the serial
version of MUMPS in our code and it performs quite well. We use it to
solve a preconditioning system inside of CVODE (multistep
time-integration). CVODE is more or less a black-box solver, which then
sometimes calls the preconditioning solver. There is an MPI version of
CVODE available, but I didn't want to parallelize that part of the code,
because it is not really significant. Furthermore, I don't know what
happens internally, and I would have to handle two different
parallelizations at once.

But thank you for your explanation.
I didn't think that it would be that complicated to get MPI working.

Greetings,
Florian