Josh Hursey wrote:
> On Apr 29, 2008, at 12:55 AM, Sharon Brunett wrote:
>> I'm finding that using ompi-checkpoint on an application which is
>> very cpu bound takes a very very long time. For example, trying to
>> checkpoint a 4 or 8 way Pallas MPI Benchmark application can take
>> more than an hour. The problem is not where I'm dumping checkpoints
>> (I've tried local and an nfs mount with plenty of space, and cpu
>> intensive apps checkpoint quickly).
>> I'm using BLCR_VERSION=0.6.5 and openmpi-1.3a1r18241.
>> Is this condition common and if so, are there possibly mca paramters
>> which could help?
> It depends on how you configured Open MPI with checkpoint/restart.
> There are two modes of operation: No threads, and with a checkpoint
> thread. They are described a bit more in the Checkpoint/Restart Fault
> Tolerance User's Guide on the wiki.
> By default we compile without the checkpoint thread. The restriction
> here is that all processes must be in the MPI library in order to make
> progress on the global checkpoint. For CPU intensive applications this
> may cause quite a delay in the time to start, and subsequently finish,
> a checkpoint. I'm guessing that this is what you are seeing.
> If you configure with the checkpoint thread (add '--enable-mpi-threads
> --enable-ft-thread' to ./configure) then Open MPI will create a thread
> that runs with each application process. This thread is fairly light
> weight and will make sure that a checkpoint progresses even when the
> process is not in the Open MPI library.
> Try enabling the checkpoint thread and see if that helps improve the
> checkpoint time.
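> For what it's worth, an untested sketch of the full sequence (rebuild
> with the FT thread, launch with C/R enabled via the 'ft-enable-cr'
> aggregate MCA file, then checkpoint mpirun's PID; the application
> name below is just a placeholder):
>
>   ./configure --with-ft=cr --enable-mpi-threads --enable-ft-thread
>   make all install
>   mpirun -np 4 -am ft-enable-cr ./app
>   ompi-checkpoint <PID of mpirun>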
First...please pardon the blunder in my earlier mail. Comms-bound apps
are the ones taking a while to checkpoint, not cpu-bound ones. In any
case, I tried configuring with the above two configure options, but
still no luck improving checkpoint times or getting checkpoints of
larger MPI task runs to complete.
It looks like the checkpointing is just hanging. For example, I can
checkpoint a 2-way comms-bound code (one task on each of two nodes) ok.
When I ask for a 4-way run on 2 nodes, 30 minutes after issuing
ompi-checkpoint <PID> I only see 1 ckpt directory with data in it!
-bash-2.05b$ ls -l *
-rw------- 1 sharon shc-support 1907476 2008-04-29 10:49
-rw-r--r-- 1 sharon shc-support 33 2008-04-29 10:49 snapshot_meta.data
The file system receiving the checkpoints is local; I've tried /scratch
and other locations as well.
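For reference, this is roughly how I point the snapshots at local disk
(the snapc parameter name is my reading of the C/R guide, and the
directory and app name are just examples):

  mpirun -np 4 -am ft-enable-cr \
      -mca snapc_base_global_snapshot_dir /scratch/ckpt ./my_app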
I can checkpoint some codes (like xhpl) just fine across 8 MPI tasks
(t nodes), dumping 254M total. Thus, the very long/stuck checkpointing
seems rather application-dependent.
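I'll also try a verbose checkpoint next to see where it stalls; as far
as I can tell, something like this should report progress as the
checkpoint proceeds:

  ompi-checkpoint -v <PID of mpirun>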
Here's how I configured Open MPI:
--enable-mpi-threads --enable-ft-thread --with-ft=cr --enable-shared
Thanks for any further insights you may have.