Subject: Re: [OMPI users] MPI jobs ending up in one node
From: Peter Teoh (htmldeveloper_at_[hidden])
Date: 2009-03-14 04:48:57


Oops... sorry... it is the Intel MPI library. Thanks!

On Fri, Mar 13, 2009 at 9:47 PM, Ralph Castain <rhc_at_[hidden]> wrote:
> Hmmm...your comments don't sound like anything relating to Open MPI. Are you
> sure you are not using some other MPI?
>
> Our mpiexec isn't a script, for example, nor do we have anything named
> I_MPI_PIN_PROCESSOR_LIST in our code.
>
> :-)
>
> On Mar 13, 2009, at 4:00 AM, Peter Teoh wrote:
>
>> I saw the following problem posed somewhere - can anyone shed some
>> light?   Thanks.
>>
>> I have a cluster of 8-socket quad-core systems (32 cores per node)
>> running Red Hat 5.2. It seems that whenever I run multiple MPI jobs on
>> a single node, they all end up on the same processors. For example, if
>> I submit four 8-way jobs to a single box, they all land on CPUs 0 to 7,
>> leaving CPUs 8 to 31 idle.
>>
>> I then tried all sorts of I_MPI_PIN_PROCESSOR_LIST combinations, but
>> short of explicitly listing the processors for each run, the jobs all
>> still end up stuck on CPUs 0-7. Browsing through the mpiexec script, I
>> realised that it does a taskset on each run.
>> As my jobs are all submitted through a scheduler (PBS in this case), I
>> cannot know at submission time which CPUs are already in use. So is
>> there a simple way to tell mpiexec to set the taskset affinity
>> correctly for each run so that it picks only the idle processors?
>> Thanks.
>>
>> --
>> Regards,
>> Peter Teoh

-- 
Regards,
Peter Teoh
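
For the pinning question quoted above, here is a minimal sketch of the
two usual workarounds with Intel MPI, assuming a PBS job script running
an 8-rank job on one of the 32-core nodes described in the thread.
I_MPI_PIN and I_MPI_PIN_PROCESSOR_LIST are Intel MPI environment
variables; ./my_mpi_app and the 8-15 core range are placeholders, not
anything taken from the original post.

    #!/bin/bash
    #PBS -l nodes=1:ppn=8

    cd $PBS_O_WORKDIR

    # Option A: turn Intel MPI's CPU pinning off and let the Linux
    # scheduler spread the ranks of each independently submitted job
    # across the node's idle cores.
    export I_MPI_PIN=off
    mpiexec -n 8 ./my_mpi_app

    # Option B (alternative): keep pinning, but hand each job its own
    # explicit, non-overlapping core range. The range (8-15 here) is a
    # placeholder; something outside mpiexec has to choose it, because
    # each job is launched independently and cannot see what the other
    # jobs are already pinned to.
    #export I_MPI_PIN_PROCESSOR_LIST=8-15
    #mpiexec -n 8 ./my_mpi_app

Option A is the simpler fit for the situation in the thread, since the
OS scheduler then decides where each job's processes run; Option B only
helps if something outside the job (the scheduler or a site convention)
can assign each job a distinct core range, which is exactly what the
original poster says he cannot know at submission time.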