Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] RFC: Merge tmp fault recovery branch into trunk
From: Leonardo Fialho (leonardofialho_at_[hidden])
Date: 2010-02-25 11:54:59

Hi George,

>> Hum... I'm really afraid about this. I understand your choice since it is really a good solution for fail/stop/restart behaviour, but looking from the fail/recovery side, can you envision some alternative for the orted's reconfiguration on the fly?
> I don't see why the current code prohibits such behavior. However, I don't see right now in this branch how the remaining daemons (and MPI processes) reconstruct the communication topology, but this is just a technicality.
> Anyway, this is the code that UT will bring in. All our work focuses on maintaining the existing environment up and running instead of restarting everything. The orted will auto-heal (i.e., reshape the underlying topology, recreate the connections, and so on), and the fault is propagated to the MPI layer, which will decide what to do next.

When you say "MPI layer", what exactly do you mean? The MPI interface, or the network stack that supports MPI communication (BTL, PML, etc.)?

In my mind, an orted failure (which takes down all procs running under that daemon) is an environment failure that leads to job failure. Thus, to use a fail/recovery strategy, the daemon should first be recovered (possibly by relaunching it and updating its proc/job structures), and after that all failed procs originally running under it should be recovered as well (perhaps from a checkpoint, optionally with a message log). Of course, if available, a spare orted could be used instead.

Regarding the MPI application, this 'environment reconfiguration' probably requires updates/reconfiguration on the communication stack that supports MPI communication (BTL, PML, etc.).

Are we thinking in the same direction, or have I missed something along the way?

Best regards,