
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Tight integration and interactive sessions with SGE
From: Reuti (reuti_at_[hidden])
Date: 2008-11-13 03:46:55

On 13.11.2008, at 05:41, Scott Beardsley wrote:

> Reuti wrote:
>> qlogin will create a completely fresh bash, which is not aware of
>> running under SGE. Although you could set the SGE_* variables by
>> hand, it's easier to use an interactive session with:
> In the past we'd source some sge script and SLOTS, TMPDIR, etc were
> populated.

What do you mean by "in the past" - did you upgrade SGE from version x
to version y? You can still source <execd-spool>/<nodename>/
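For illustration, the mechanism is roughly the following sketch; the spool path and the `environment` file name are assumptions for a default installation, and the layout is faked in a temporary directory here so it runs outside SGE:

```shell
#!/bin/sh
# Sketch: the execd keeps a per-job environment file under its spool
# directory; sourcing it restores the SGE_* variables in a fresh shell.
# The layout below is faked so the sketch is self-contained; on a real
# node, replace $SPOOL with your <execd-spool>/<nodename> path.
SPOOL=$(mktemp -d)          # stand-in for <execd-spool>/<nodename>
JOB_ID=4711                 # hypothetical job id
mkdir -p "$SPOOL/active_jobs/$JOB_ID.1"
printf 'NSLOTS=4\n' > "$SPOOL/active_jobs/$JOB_ID.1/environment"

# Sourcing the file brings the job's variables into the current shell.
. "$SPOOL/active_jobs/$JOB_ID.1/environment"
echo "NSLOTS=$NSLOTS"       # prints NSLOTS=4
rm -rf "$SPOOL"
```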

>> $ qrsh -pe orte 4 /path/to/binary
>> If you really need a shell, you can get one with:
>> $ qrsh -pe orte 4 bash -il
> That breaks my shell (erase, history, tab-completion) but it seems
> to work other than that. Any suggestions on getting a unique list
> of nodes without touching them N times (N=# of slots assigned)? I
> guess I could do "mpirun uname -n|sort -u" but that just seems,
> well, wrong.

There is nothing stopping you from defining a start/stop_proc_args
script anyway. You could use the example in $SGE_ROOT/mpi and then
call this script with -uniq, which will give you a list in the usual
$TMPDIR/machines file. You can even create and remove the temporary
directories in these scripts, so you don't have to do it by hand
every time you run an interactive parallel job.
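A minimal start_proc_args-style sketch of that idea, assuming the usual $PE_HOSTFILE format (hostname, slot count, queue, processor range per line). The host file is faked here so the script runs outside SGE, and the "uniq" argument is a simplified stand-in for the option in the $SGE_ROOT/mpi example:

```shell
#!/bin/sh
# Sketch: write $TMPDIR/machines from $PE_HOSTFILE - one line per host
# when "uniq" is requested, otherwise one line per granted slot.
# SGE normally sets PE_HOSTFILE and TMPDIR; faked here for a dry run.
UNIQ=$1                     # pass "uniq" for a de-duplicated list
TMPDIR=$(mktemp -d)
PE_HOSTFILE=$TMPDIR/pe_hostfile
cat > "$PE_HOSTFILE" <<'EOF'
node01 2 all.q@node01 UNDEFINED
node02 4 all.q@node02 UNDEFINED
EOF

while read host slots rest; do
    if [ "$UNIQ" = "uniq" ]; then
        echo "$host"        # each host exactly once
    else
        i=0                 # repeat the host once per slot
        while [ $i -lt "$slots" ]; do echo "$host"; i=$((i + 1)); done
    fi
done < "$PE_HOSTFILE" > "$TMPDIR/machines"

cat "$TMPDIR/machines"      # without "uniq": node01 twice, node02 four times
rm -rf "$TMPDIR"
```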

-- Reuti

BTW: To avoid saving the bash history and/or to source the job's
environment automatically, you could put something like this in your
shell startup file:

# Get the command name of the parent process; under SGE an interactive
# job's shell is started by a process named "sge_shepherd-<jobid>".
MYPARENT=`ps -p $$ -o ppid --no-header`
MYSTARTUP=`ps -p $MYPARENT -o command --no-header`

if [ "${MYSTARTUP:0:13}" = "sge_shepherd-" ]; then
    # Extract the job id from the shepherd's process name.
    MYJOBID=${MYSTARTUP#sge_shepherd-}
    MYJOBID=${MYJOBID%% *}

    echo "Running inside SGE"
    echo "Job $MYJOBID"

    . /usr/sge/default/spool/$HOSTNAME/active_jobs/$MYJOBID.1/
    unset HISTFILE # don't save the history
fi

> In any case, thanks for the tips!
> Scott