My question is why? If you are willing to reserve a chunk of your
machine for yet-to-exist tasks, why not just create them all at mpirun
time and slice and dice your communicators as appropriate?
On Thu, 2010-01-28 at 09:24 +1100, Jaison Paul wrote:
> Hi, I am just reposting my earlier query once again. If anyone can
> give some hints, that would be great.
> Thanks, Jaison
> Jaison Paul wrote:
> > Hi All,
> > I am trying to use MPI for scientific High Performance Computing (HPC)
> > applications. I use MPI_Comm_spawn to create child processes. Is there
> > a way to start the child processes earlier than the parent process,
> > using MPI_Comm_spawn?
> > I want this because my experiments showed that the time the parent
> > takes to spawn the children is too long for HPC apps, which slows down
> > the whole run. If the children were already running when the parent
> > application looks for them, that initial delay could be avoided. Is
> > there a way to do that?
> > Thanks in advance,
> > Jaison
> > Australian National University
> > _______________________________________________
> > users mailing list
> > users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/users