Open MPI User's Mailing List Archives


From: Ralph H Castain (rhc_at_[hidden])
Date: 2007-02-13 14:27:07


Oh, I should have made something clear - I believe those command line
options aren't available in the 1.1 series. You'll have to upgrade to 1.2
(available in beta at the moment).

On 2/13/07 12:20 PM, "Ralph H Castain" <rhc_at_[hidden]> wrote:

>
>
>
> On 2/13/07 11:30 AM, "Brock Palen" <brockp_at_[hidden]> wrote:
>
>> On Feb 13, 2007, at 12:55 PM, Troy Telford wrote:
>>
>>> First, the good news:
>>> I've recently tried PBS Pro 8 with Open MPI 1.1.4.
>>>
>>> At least with PBS Pro version 8, you can (finally) build the TM module as a
>>> dynamic/shared object, rather than having to compile everything
>>> statically. (So the FAQ needs a minor update.) The jobs seem to run and
>>> use TM properly.
>>
>> Good,
>>
>>>
>>> The bad news:
>>> My memory is a bit fuzzy on how to use OMPI with PBS and cousins.
>>> Sad, I
>>> know, but that doesn't make it any less true.
>>
>> Make sure your OMPI build was configured with --with-tm=/path/to/pbs
>>
>> You can also use ompi_info and grep for tm.
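>>
>> For example (the PBS install path below is just a placeholder), the
>> configure flag and the check would look something like:
>>
>>   ./configure --with-tm=/path/to/pbs
>>   ompi_info | grep tm
>>
>> If TM support was built in, the ompi_info output should list tm components.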
>>
>>>
>>> For the moment, I've read the FAQ and see that you need to use the
>>> '-np
>>> <foo>' option to specify the number of processes. For some reason, I
>>> recall that OMPI used to be able to get the number of processes to run
>>> from PBS; am I just 'remembering' something that never existed?
>>
>> To my memory this has never been the case. There is $PBS_NODEFILE, though,
>> which you can use to get the count:
>>
>> CPUS=`cat $PBS_NODEFILE | wc -l`
>>
>> mpirun -np $CPUS myexe
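>>
>> A minimal job script along those lines might look like (the resource
>> request and executable name are just placeholders):
>>
>>   #!/bin/sh
>>   #PBS -l nodes=2:ppn=4
>>
>>   CPUS=`cat $PBS_NODEFILE | wc -l`
>>   mpirun -np $CPUS myexe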
>
> Actually, that isn't the complete story. Open MPI will automatically run in
> several ways:
>
> 1. one proc in each available slot on every node in your allocation: just
> don't include -np on your command line. You can rank them by slot (--byslot,
> the default) or by node (--bynode).
>
> 2. one proc on each node in your allocation: use --pernode on your command
> line. You can limit the number of nodes used by combining --pernode with -np
> <foo> - we will launch <foo> procs, one per node.
>
> 3. a specified number of procs on every node: use --npernode <n>. Again, you
> can limit the number of procs launched by combining it with -np, and you can
> rank by slot or node.
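>
> For illustration (the executable name and counts are just placeholders),
> those cases would look something like:
>
>   mpirun myexe                   # one proc per available slot, ranked by slot
>   mpirun --bynode myexe          # one proc per slot, ranked by node
>   mpirun --pernode myexe         # one proc per node
>   mpirun --pernode -np 4 myexe   # one proc per node, on at most 4 nodes
>   mpirun --npernode 2 myexe      # two procs per node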
>
> Ralph
>
>>
>>> --
>>> Troy Telford
>>>
>>>
>>
>>
>>
>> Brock Palen
>> Center for Advanced Computing
>> brockp_at_[hidden]
>> (734)936-1985
>>
>>