
Open MPI User's Mailing List Archives



From: Ralph H Castain (rhc_at_[hidden])
Date: 2007-08-30 13:25:53

I take it you are running in an rsh/ssh environment (as opposed to a managed
environment like SLURM)?

I'm afraid that you have to tell us -all- of the nodes that will be utilized
in your job at the beginning (i.e., to mpirun). This requirement is planned
to be relaxed in a later version, but that won't be out for some time.

At the moment, there is no workaround.
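To illustrate the constraint described above, here is a minimal sketch of the pattern under discussion. The hostnames (node1, node2) and the child binary (./child) are hypothetical; the key point is that every node the child might land on must already have been given to mpirun, and the standard "host" info key can then steer the spawn:

```c
/* parent.c -- sketch only. Launch with all nodes listed up front, e.g.:
 *   mpirun -np 1 --host node1,node2 ./parent
 * Hostnames node1/node2 and the ./child binary are assumptions for
 * illustration, not taken from the thread. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm child;
    MPI_Info info;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    /* Ask for the child to be placed on node2. This only works if node2
     * was already part of the allocation passed to mpirun via --host or
     * a hostfile; it cannot add a new node at runtime. */
    MPI_Info_set(info, "host", "node2");

    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, info,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```

In other words, the parent can decide *where* each child runs at spawn time, but only within the set of nodes mpirun was told about at startup.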


On 8/30/07 9:51 AM, "Murat Knecht" <MKNECHT_at_[hidden]> wrote:

> Hi,
> I have a question regarding the --host(file) option of mpirun. Whenever I
> try to fork a process on another node using Spawn(), I get the following
> message:
> Verify that you have mapped the allocated resources properly using the
> --host specification.
> I understand this can be fixed by providing the hostnames to be used,
> either via --host or via a hostfile containing the names and possibly
> the available slots.
> This may be an acceptable solution, if one wants to start the same process
> on several blades, but what about starting a parent process which then
> initiates different child processes on other blades?
> In this scenario mpirun does not initially need to know which other
> blades exist; it is only supposed to start the parent process locally.
> Surely there must be a way to avoid specifying the blades in advance
> and instead discover this information at runtime, especially in a
> changing landscape where nodes are added dynamically.
> Is there a way to avoid this --host option?
> I'm using the latest version of Open MPI (1.2.3).
> Best regards,
> Murat
> _______________________________________________
> users mailing list
> users_at_[hidden]