On 12/17/07 8:19 AM, "Elena Zhebel" <ezhebel_at_[hidden]> wrote:
> Hello Ralph,
> Thank you for your answer.
> I'm using Open MPI 1.2.3 (glibc 2.3.2) on SUSE Linux 10.0.
> My "master" executable runs on only one local host, then it spawns
> "slaves" (with MPI::Intracomm::Spawn).
> My question was: how to determine the hosts where these "slaves" will be
> spawned. You said: "You have to specify all of the hosts that can be used
> by your job in the original hostfile". How can I specify the hostfile? I
> cannot find it in the documentation.
Hmmm...sorry about the lack of documentation. I always assumed that the MPI
folks in the project would document such things since it has little to do
with the underlying run-time, but I guess that fell through the cracks.
There are two parts to your question:
1. how to specify the hosts to be used for the entire job. I believe that is
somewhat covered here:
That FAQ tells you what a hostfile should look like, though you may already
know that. Basically, we require that you list -all- of the nodes that both
your master and slave programs will use.
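For concreteness, a hostfile is just a plain-text file listing one node per line, optionally with a slot count. The node names below are placeholders:

```
# my_hostfile - list every node the master *and* the slaves may use
host1 slots=2
host2 slots=2
host3 slots=4
```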
2. how to specify which nodes are available for the master, and which for
the slaves.
You would specify the host for your master on the mpirun command line with
mpirun -n 1 -hostfile my_hostfile -host my_master_host my_master.exe
This directs Open MPI to map that specified executable on the specified host
- note that my_master_host must have been in my_hostfile.
Inside your master, you would create an MPI_Info key "host" that has a value
consisting of a string "host1,host2,host3" identifying the hosts you want
your slave to execute upon. Those hosts must have been included in
my_hostfile. Include that key in the MPI_Info array passed to your Spawn.
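A minimal sketch of that master-side call, using the C++ bindings as above. The slave executable name, host names, and process count here are placeholders; every host listed must also appear in the hostfile passed to mpirun:

```cpp
#include <mpi.h>

int main(int argc, char** argv) {
    MPI::Init(argc, argv);

    // Build an Info object whose "host" key names the target nodes.
    // Each node must also be listed in my_hostfile.
    MPI::Info info = MPI::Info::Create();
    info.Set("host", "host1,host2,host3");

    // Spawn 3 slave processes on the hosts named above;
    // rank 0 of the master's communicator acts as the root.
    MPI::Intercomm slaves = MPI::COMM_WORLD.Spawn(
        "./my_slave.exe",   // slave executable (hypothetical name)
        MPI::ARGV_NULL,     // no extra arguments
        3,                  // number of slaves to spawn
        info,
        0);                 // root rank

    // ... communicate with the slaves over the intercommunicator ...

    slaves.Disconnect();
    info.Free();
    MPI::Finalize();
    return 0;
}
```

The equivalent C call is MPI_Comm_spawn with an MPI_Info built the same way.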
We don't currently support providing a hostfile for the slaves (as opposed
to the host-at-a-time string above). This may become available in a future
release - TBD.
Hope that helps
> Thanks and regards,
> -----Original Message-----
> From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
> Behalf Of Ralph H Castain
> Sent: Monday, December 17, 2007 3:31 PM
> To: Open MPI Users <users_at_[hidden]>
> Cc: Ralph H Castain
> Subject: Re: [OMPI users] MPI::Intracomm::Spawn and cluster
> On 12/12/07 5:46 AM, "Elena Zhebel" <ezhebel_at_[hidden]> wrote:
>> I'm working on an MPI application where I'm using Open MPI instead of
>> another implementation. In my "master" program I call the function
>> MPI::Intracomm::Spawn, which spawns the "slave" processes. It is not
>> clear to me how to spawn the "slave" processes over the network.
>> Currently the "master" creates the "slaves" on the same host.
>> If I use 'mpirun --hostfile openmpi.hosts' then processes are spawned
>> over the network as expected. But now I need to spawn processes over the
>> network from my own executable using MPI::Intracomm::Spawn; how can I
>> achieve it?
> I'm not sure from your description exactly what you are trying to do, nor
> in what environment this is all operating, nor what version of Open MPI
> you are using. Setting aside the environment and version issue, I'm
> guessing that you are running your executable over some specified set of
> hosts, and want to provide a different hostfile that specifies the hosts
> to be used for the "slave" processes. Correct?
> If that is correct, then I'm afraid you can't do that in any version of
> Open MPI today. You have to specify all of the hosts that can be used by
> your job in the original hostfile. You can then specify a subset of those
> hosts to be used by your original "master" program, and then specify a
> different subset to be used by the "slaves" when calling Spawn.
> But the system requires that you tell it -all- of the hosts that are
> to be used at the beginning of the job.
> At the moment, there is no plan to remove that requirement, though there
> has been occasional discussion about doing so at some point in the future.
> No promises that it will happen, though - managed environments, in
> particular, currently object to the idea of changing the allocation
> on-the-fly. We may, though, make a provision for purely hostfile-based
> environments (i.e., unmanaged) at some time in the future.
>> Thanks in advance for any help.
>> users mailing list