
Subject: Re: [OMPI users] multithreaded jobs
From: Dave Love (d.love_at_[hidden])
Date: 2013-04-30 10:52:52


Ralph Castain <rhc_at_[hidden]> writes:

> On Apr 25, 2013, at 5:33 PM, Vladimir Yamshchikov <yaximik_at_[hidden]> wrote:
>
>> $NSLOTS is what is requested by -pe openmpi <ARG> in the script; my understanding is that by default it is threads.

Is there something in the documentation
<http://arc.liv.ac.uk/SGE/htmlman/manuals.html> that suggests that? [It
currently incorrectly says processes, rather than slots, in at least one
place, which I'll fix.]
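
[For reference, a minimal sketch of how the slot request and $NSLOTS
relate in a job script; the PE name, slot count and program name are
just examples:

   #$ -pe openmpi 16            # ask for 16 slots in the "openmpi" PE
   # SGE sets NSLOTS to the number of slots actually granted (16 here);
   # it counts slots, not threads.
   mpirun -np $NSLOTS ./my_mpi_program
]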

> What you want to do is:
>
> 1. request a number of slots = the number of application processes * the number of threads each process will run

[If really necessary, maybe use a job submission verifier to fiddle with
what the user supplies.]

> 2. execute mpirun with the --cpus-per-proc N option, where N = the number of threads each process will run.
>
> This will ensure you have one core for each thread. Note, however,
> that we don't actually bind a thread to the core - so having more
> threads than there are cores on a socket can cause a thread to bounce
> across sockets and (therefore) potentially across NUMA regions.

Does that mean that binding is suppressed in that case, as opposed to
binding N cores per process, which is what I thought it did? (I can't
immediately test it.)
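
[As a concrete sketch of the recipe above, with the process count,
thread count, PE name and program name as examples only: for 4 MPI
processes each running 4 threads, request 4*4 = 16 slots and tell
mpirun how many cores each process should get:

   #$ -pe openmpi 16                 # 4 processes * 4 threads
   export OMP_NUM_THREADS=4
   mpirun -np 4 --cpus-per-proc 4 ./my_hybrid_program
]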

I don't understand what causes the over-subscription in this specific
case. However, if the program's runtime needs to be told things like the
thread count, you can do so by setting OMP_NUM_THREADS with an SGE JSV;
see the archives of the gridengine list. (The SGE_BINDING variable that
recent SGE provides to the job can be converted to GOMP_CPU_AFFINITY
etc., but that's probably only useful for single-process jobs.)
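
[For example, something along these lines in the job script, assuming a
GNU OpenMP runtime and a single-process job; an untested sketch:

   # SGE_BINDING holds the OS processor numbers the job was bound to,
   # space-separated; GOMP_CPU_AFFINITY accepts a list in the same form.
   if [ -n "$SGE_BINDING" ]; then
       export GOMP_CPU_AFFINITY="$SGE_BINDING"
   fi
]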

There may be a case for OMPI to support this sort of thing for DRMs like
SGE which don't start the MPI processes themselves; you potentially need
to export the binding information per-process.

-- 
Community Grid Engine:  http://arc.liv.ac.uk/SGE/