Subject: Re: [OMPI users] Newbie question continues, a step toward real app
From: Martin Siegert (siegert_at_[hidden])
Date: 2011-01-14 15:47:21


On Thu, Jan 13, 2011 at 05:34:48PM -0800, Tena Sakai wrote:
> Hi Gus,
>
> > Did you speak to the Rmpi author about this?
>
> No, I haven't, but here's what the author wrote:
> https://stat.ethz.ch/pipermail/r-sig-hpc/2009-February/000104.html
> in which he states:
> ...The way of spawning R slaves under LAM is not working
> any more under OpenMPI. Under LAM, one just uses
> R -> library(Rmpi) -> mpi.spawn.Rslaves()
> as long as host file is set. Under OpenMPI this leads only one R slave on
> the master host no matter how many remote hosts are specified in OpenMPI
> hostfile. ...
> His README file doesn't tell what I need to know. In the light of
> LAM MPI being "absorbed" into openMPI, I find this unfortunate.

Hmm. It has been a while since I last had to compile Rmpi, but the following
works with openmpi-1.3.3 and R-2.10.1:

# mpiexec -n 1 -hostfile mfile R --vanilla < Rmpi-hello.R

with a script Rmpi-hello.R like this:

library(Rmpi)
# spawn one R slave per line in the hostfile
mpi.spawn.Rslaves()
# have every slave report its rank and the communicator size
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))
# shut the slaves down, then terminate MPI and quit R
mpi.close.Rslaves()
mpi.quit()

The only unfortunate effect is that, by default, mpi.spawn.Rslaves()
spawns as many slaves as there are lines in the hostfile, so you
end up with one process too many: 1 master + N slaves. You can fix
that by using

Nprocs <- mpi.universe.size()
mpi.spawn.Rslaves(nslaves=Nprocs-1)

instead of the simple mpi.spawn.Rslaves() call.
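
Putting the two together, the adjusted script would look something like this:

library(Rmpi)
# spawn one slave fewer than the number of available slots,
# so that master + slaves matches the MPI universe size
Nprocs <- mpi.universe.size()
mpi.spawn.Rslaves(nslaves=Nprocs-1)
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))
# clean shutdown of the slaves before quitting
mpi.close.Rslaves()
mpi.quit()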

BTW: the whole script works the same way when submitted under Torque
using the TM interface, without specifying -hostfile ... on the
mpiexec command line.
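
For reference, a Torque job script for this would look roughly like the
sketch below (the resource request and walltime are just placeholders,
and it assumes your Open MPI was built with TM support):

#!/bin/bash
# placeholder resource request -- adjust to your cluster
#PBS -l nodes=2:ppn=4
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR

# no -hostfile needed: with TM support mpiexec learns the
# allocated nodes directly from Torque
mpiexec -n 1 R --vanilla < Rmpi-hello.R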

Cheers,
Martin

-- 
Martin Siegert
Head, Research Computing
WestGrid/ComputeCanada Site Lead
IT Services                                phone: 778 782-4691
Simon Fraser University                    fax:   778 782-4242
Burnaby, British Columbia                  email: siegert_at_[hidden]
Canada  V5A 1S6