Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Option to use only 7 cores out of 8 on each node
From: Addepalli, Srirangam V (srirangam.v.addepalli_at_[hidden])
Date: 2010-03-02 21:07:23


It works after creating a new PE, and even from the command prompt without using SGE.
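
(For reference, a rough sketch of what creating such a PE can look like under
SGE; the PE name mpi_7, the queue all.q, and the job script job.sh are only
placeholders, not taken from this thread:)

% qconf -ap mpi_7                # add a new parallel environment
pe_name            mpi_7
slots              999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    7             # exactly 7 slots per execution host
control_slaves     TRUE          # needed for a tight Open MPI integration
job_is_first_task  FALSE
urgency_slots      min
% qconf -aattr queue pe_list mpi_7 all.q    # make the PE requestable in the queue
% qsub -pe mpi_7 14 job.sh                  # 14 slots => 2 hosts x 7 processes each
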
Thanks
Rangam
________________________________________
From: users-bounces_at_[hidden] [users-bounces_at_[hidden]] On Behalf Of Reuti [reuti_at_[hidden]]
Sent: Tuesday, March 02, 2010 12:35 PM
To: Open MPI Users
Subject: Re: [OMPI users] Option to use only 7 cores out of 8 on each node

Am 02.03.2010 um 19:26 schrieb Eugene Loh:

> Eugene Loh wrote:
>
>> Addepalli, Srirangam V wrote:
>>
>>> I tried using the following syntax with a machinefile:
>>> mpirun -np 14 -npernode 7 -machinefile machinefile ven_nw.e < coll.dt5
>>
>> It "works" for me. I'm not using SGE, though.

When it's tightly integrated with SGE, you may need a PE with a fixed
allocation rule of 7. Then everything should work automatically,
without any need for a machinefile for mpiexec. If you want to use the
node exclusively for your job even though you request only 7 of its 8
available slots, you also need to request an exclusive resource (e.g.
named "exclusive") which is attached to each exechost.

-- Reuti

>>
>> % cat machinefile
>> % mpirun -tag-output -np 14 -npernode 7 -machinefile machinefile hostname
>
> Incidentally, the key ingredient here is the "-npernode 7" part.
> The machine file only needs enough slots. E.g., you could have had:
>
> % cat machinefile
> node0 slots=20
> node1 slots=20
>
> mpirun will see that there are enough slots on each node, but will
> launch only 7 per node due to the -npernode switch.
>
> That said, I don't know what's going wrong in your case -- only
> that things work as advertised for me.
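
(Not in the original message, but as a quick check of the distribution --
assuming the placeholder hosts node0 and node1 from the machinefile above --
counting how many ranks land on each host should show 7 apiece:)

% mpirun -np 14 -npernode 7 -machinefile machinefile hostname | sort | uniq -c
      7 node0
      7 node1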

_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users