
Open MPI User's Mailing List Archives


From: Reuti (reuti_at_[hidden])
Date: 2007-09-17 15:17:00


On 17.09.2007, at 16:34, Brian Barrett wrote:

> On Sep 10, 2007, at 1:35 PM, Lev Givon wrote:
>
>> When launching an MPI program with mpirun on an XGrid cluster, is
>> there a way to have the program being run temporarily copied to
>> the compute nodes in the cluster when executed (i.e., similar to
>> what the xgrid command line tool does)? Or is it necessary to make
>> the program being run available on every compute node (e.g., using
>> NFS data partitions)?
>
> This is functionality we never added to our XGrid support. It
> certainly could be added, but we have an extremely limited supply of
> developer cycles for the XGrid support at the moment.

I think this should be implemented for all platforms, if it has to be
part of Open MPI at all (the parallel library Linda offers such a
feature). Otherwise the option would be to submit the job using
XGrid, or any other queuing system where you can set up such file
transfers in a prolog script (and an epilog to remove the programs
again) - or copy the program to the created $TMPDIR, which I would
suggest if you decide to use e.g. Sun Grid Engine, as this directory
is erased automatically after the job.
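
To illustrate, the $TMPDIR staging approach could look roughly like
the job script below. This is only a sketch assuming Sun Grid Engine
(which exports $TMPDIR and $NSLOTS to the job); the program name and
its path are made up for the example:

```shell
#!/bin/sh
#$ -N staged_mpi_job
#$ -pe mpi 8
#$ -cwd

# Stage the binary from a shared filesystem (e.g. $HOME) into the
# node-local scratch directory that Grid Engine created for this job.
cp "$HOME/bin/my_mpi_prog" "$TMPDIR/"

# Run the staged copy; $NSLOTS is the slot count granted by the PE.
mpirun -np "$NSLOTS" "$TMPDIR/my_mpi_prog"

# No explicit cleanup is needed: Grid Engine removes $TMPDIR
# automatically when the job finishes.
```

(With other queuing systems the same copy/remove steps would go into
the prolog and epilog scripts instead.)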

But just out of curiosity: how does XGrid handle this, since you
refer to the command-line tool? If you have a job script with three
mpirun commands for three different programs, will XGrid transfer all
three programs to the nodes for this job, or is it limited to one
mpirun per XGrid job?

-- Reuti

>
> Brian
>
> --
> Brian W. Barrett
> Networking Team, CCS-1
> Los Alamos National Laboratory
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users