
Subject: Re: [OMPI users] help me please, about Open MPI
From: Ilmar Wilbers (ilmarw_at_[hidden])
Date: 2008-06-17 13:20:19


TMPDIR should be a single directory:
export TMPDIR=/tmp

It should not be a list of directories separated by ':'.

"setenv TMPDIR /tmp" (or "setenv TMPDIR
/local2/pbs/myname/37911.hpc-cluster") would hence be the correct line.

After setting the variable, try
'cd $TMPDIR'
'touch test'

to verify that you have write permission.
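
For example, a minimal sketch for a csh-style job script (the directory
below is just the one from your output; adjust it as needed):

setenv TMPDIR /local2/pbs/myname/37911.hpc-cluster
cd $TMPDIR
touch test && echo "TMPDIR is writable"
rm -f test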

ilmar

Tony Smith wrote:
> thanks,
> I added "setenv TMPDIR /tmp:$TMPDIR" to my job script file.
>
> So echo $TMPDIR now shows:
>
> /tmp:/local2/pbs/myname/37911.hpc-cluster
>
> but I get the same errors.
>
> thanks
>
>
> > From: jsquyres_at_[hidden]
> > To: users_at_[hidden]
> > Date: Tue, 17 Jun 2008 11:55:05 -0400
> > Subject: Re: [OMPI users] help me please, about Open MPI
> >
> > Sure, that configure line should be fine. But that's a different
> > issue from the permissions on your temp directories. The environment
> > variables I was asking about are the ones set at run time -- not when
> > you configure/build OMPI.
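> >
> > For example, you could add something like this to the job script (a
> > sketch) to see what is actually set at run time:
> >
> > env | grep -E '^(TMPDIR|TMP)=' # show the values the job really sees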
> >
> > If you're using Torque, I *believe* that it sets TMPDIR in its jobs --
> > you might want to check (I don't use Torque on a day-to-day basis, so
> > I don't remember offhand).
> >
> >
> > On Jun 17, 2008, at 11:52 AM, Tony Smith wrote:
> >
> > > thanks,
> > >
> > > My configure line:
> > >
> > > ./configure --prefix=/ptmp/myname/openmpi --enable-static --disable-shared \
> > >     CC=icc CXX=icpc F77=ifort FC=ifort --with-gm=/opt/gm -with-tm=/usr/spool/PBS/
> > >
> > > Is that correct?
> > >
> > > thanks
> > >
> > >
> > > > From: jsquyres_at_[hidden]
> > > > To: users_at_[hidden]
> > > > Date: Tue, 17 Jun 2008 10:35:06 -0400
> > > > Subject: Re: [OMPI users] help me please, about Open MPI
> > > >
> > > > What are the exact permissions on /tmp? They should likely be 1777.
> > > >
> > > > Do you have the TMPDIR or TMP environment variables set? If so, is
> > > > that directory also world-writable? (if set, these will override the
> > > > default location of /tmp)
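> > > >
> > > > For example, a quick check (a sketch; drwxrwxrwt is the 1777 mode):
> > > >
> > > > ls -ld /tmp # expect permissions like drwxrwxrwt
> > > > printenv TMPDIR TMP # prints nothing if neither override is set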
> > > >
> > > > The error you are seeing (session directory failed) usually has
> > > > to do with ORTE not being able to create a session directory for
> > > > the user.
> > > >
> > > >
> > > >
> > > > On Jun 17, 2008, at 10:27 AM, Tony Smith wrote:
> > > >
> > > > > thanks,
> > > > > I changed /tmp and /ptmp and their subdirectories to be writable.
> > > > >
> > > > > But I still get the same errors.
> > > > >
> > > > > thanks,
> > > > >
> > > > >
> > > > > > From: jsquyres_at_[hidden]
> > > > > > To: users_at_[hidden]
> > > > > > Date: Tue, 17 Jun 2008 09:10:18 -0400
> > > > > > Subject: Re: [OMPI users] help me please, about Open MPI
> > > > > >
> > > > > > Is /tmp writable on your compute nodes?
> > > > > >
> > > > > >
> > > > > > On Jun 16, 2008, at 1:49 PM, Tony Smith wrote:
> > > > > >
> > > > > > > Dear Sir:
> > > > > > >
> > > > > > > thanks.
> > > > > > > I have changed it to its absolute path:
> > > > > > > /ptmp/myname/openmpi123/ompi123_install/bin/mpirun -np 8 \
> > > > > > >   /ptmp/myname/openmpi123/openmpi-1.2.3/examples/hello_c
> > > > > > >
> > > > > > > But I still got the error:
> > > > > > > ====================================================
> > > > > > > [hpc-cluster-38:32635] [0,0,0] ORTE_ERROR_LOG: Error in file
> > > > > > > runtime/orte_init_stage1.c at line 626
> > > > > > > --------------------------------------------------------------------------
> > > > > > > It looks like orte_init failed for some reason; your parallel
> > > > > > > process is likely to abort. There are many reasons that a parallel
> > > > > > > process can fail during orte_init; some of which are due to
> > > > > > > configuration or environment problems. This failure appears to be
> > > > > > > an internal failure; here's some additional information (which may
> > > > > > > only be relevant to an Open MPI developer):
> > > > > > >
> > > > > > > orte_session_dir failed
> > > > > > > --> Returned value -1 instead of ORTE_SUCCESS
> > > > > > > --------------------------------------------------------------------------
> > > > > > > [hpc-cluster-38:32635] [0,0,0] ORTE_ERROR_LOG: Error in file
> > > > > > > runtime/orte_system_init.c at line 42
> > > > > > > [hpc-cluster-38:32635] [0,0,0] ORTE_ERROR_LOG: Error in file
> > > > > > > runtime/orte_init.c at line 52
> > > > > > > --------------------------------------------------------------------------
> > > > > > > Open RTE was unable to initialize properly. The error occured
> > > > > > > while attempting to orte_init(). Returned value -1 instead of
> > > > > > > ORTE_SUCCESS.
> > > > > > > --------------------------------------------------------------------------
> > > > > > > ====================================================
> > > > > > >
> > > > > > > I cannot find the file "runtime/orte_init_stage1.c".
> > > > > > >
> > > > > > > It seems that ORTE is not being initialized.
> > > > > > > I have built and installed Open MPI correctly.
> > > > > > >
> > > > > > > Why?
> > > > > > >
> > > > > > > thanks a lot ,
> > > > > > >
> > > > > > > June 16 2008
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > Date: Mon, 16 Jun 2008 19:05:06 +0200
> > > > > > > > From: gentryx_at_[hidden]
> > > > > > > > To: users_at_[hidden]
> > > > > > > > Subject: Re: [OMPI users] help me please, about Open MPI
> > > > > > > >
> > > > > > > > Dear Mister Smith,
> > > > > > > >
> > > > > > > > Thank you for installing Open MPI.
> > > > > > > >
> > > > > > > > On 12:51 Mon 16 Jun, Tony Smith wrote:
> > > > > > > > > I have changed PATH and LD_LIBRARY_PATH:
> > > > > > > >
> > > > > > > > Please be aware that you have to make those changes within
> > > > > > > > your job script. Otherwise they will only affect your local
> > > > > > > > shell.
> > > > > > > >
> > > > > > > > > But, how can I make sure that the application is run by
> > > > > > > > > Open MPI, not by mpich-gm?
> > > > > > > >
> > > > > > > > You could enforce a certain mpirun by using its absolute path,
> > > > > > > > e.g. "/ptmp/myname/openmpi123/ompi123_install/bin/mpirun foo"
> > > > > > > > instead of just "mpirun foo".
> > > > > > > >
> > > > > > > > > I deleted /opt/mpich-gm/bin from PATH and added
> > > > > > > >
> > > > > > > > You should not need to delete it; just add the Open MPI
> > > > > > > > bin directory in front of the MPICH one.
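> > > > > > > >
> > > > > > > > For example, a sketch for a csh-style job script (matching
> > > > > > > > your setenv usage):
> > > > > > > >
> > > > > > > > setenv PATH /ptmp/myname/openmpi123/ompi123_install/bin:${PATH}
> > > > > > > > which mpirun # should now report the Open MPI one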
> > > > > > > >
> > > > > > > > > Would you please help me with that ?
> > > > > > > >
> > > > > > > > I utterly hope I just did.
> > > > > > > >
> > > > > > > > Most sincerely yours ;-)
> > > > > > > > -Andreas
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > ============================================
> > > > > > > > Andreas Schäfer
> > > > > > > > Cluster and Metacomputing Working Group
> > > > > > > > Friedrich-Schiller-Universität Jena, Germany
> > > > > > > > PGP/GPG key via keyserver
> > > > > > > > I'm a bright... http://www.the-brights.net
> > > > > > > > ============================================
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Jeff Squyres
> > > > > > Cisco Systems
> > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Jeff Squyres
> > > > Cisco Systems
> > > >
> > > >
> > >
> >
> >
> > --
> > Jeff Squyres
> > Cisco Systems
> >
> >