
Open MPI User's Mailing List Archives


From: Tim Prins (tprins_at_[hidden])
Date: 2007-09-26 18:26:25


Francesco,

I guess the first step would be to decide whether or not you want to upgrade.
All of the changes are listed below; if none of them affect you and your
current setup is working fine, I would not bother upgrading.

Also, assuming you installed from a tarball, there is no way that I know of
to 'upgrade' Open MPI in the strict sense of the word. Rather you have to
reinstall it.

That being said, if you do want to use the new version you have to decide
whether or not you want to replace your current installation of Open MPI or
to install the new version alongside the old version.

Replacing the old version with the new one is nice because it is simpler and
there is less to keep track of. However, we make no guarantees about binary
compatibility between releases (although we try to keep binary compatibility
between minor releases). So if you replace your installation of Open MPI, the
only completely safe thing to do would be to recompile all your applications
with the new version.

So, if you have decided to keep your old version and add the new version, just
install Open MPI normally, but install it to a different prefix than the
other version. See http://www.open-mpi.org/faq/?category=building for
building instructions. You would then need to modify your PATH and
LD_LIBRARY_PATH to point to the installation you want to use (as shown in
http://www.open-mpi.org/faq/?category=running#run-prereqs). Alternatively you
could use something like Modules (http://modules.sourceforge.net/) or SoftEnv
(http://www-unix.mcs.anl.gov/systems/software/msys/) to manage multiple
installations.
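As a sketch, switching between side-by-side installations is just a matter of which prefix your environment points at (the prefix below is hypothetical; substitute whatever you pass to ./configure --prefix):

```shell
# Point the environment at one of several side-by-side installs.
# /opt/openmpi-1.2.4 is a hypothetical prefix chosen at configure time.
OMPI_PREFIX=/opt/openmpi-1.2.4
export PATH="$OMPI_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$OMPI_PREFIX/lib:$LD_LIBRARY_PATH"

# mpicc/mpirun found first on the PATH now come from that install:
echo "$PATH"
```

Tools like Modules or SoftEnv automate exactly these two exports per installation.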

If you want to replace your current installation of Open MPI, you have 3
options:
1. Install the new version exactly as you did the old version, overwriting the
old version. This should work, but can lead to problems if there are any
stale files left over. Thus I would recommend not doing this and doing one of
the following.

2. If you still have the build tree you used to originally install Open MPI, go
into the build tree and type 'make uninstall'. This should remove all the old
Open MPI files and allow you to install the new version normally.

3. If you installed Open MPI into a unique prefix, such as /opt/openmpi, just
delete the directory and then install the new version of Open MPI.
Personally, I think that one should always install Open MPI into a directory
where nothing else is installed, as it makes management and upgrading
significantly easier.
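To illustrate why option 3 is so convenient, here is the whole cleanup on a throwaway directory standing in for a real dedicated prefix like /opt/openmpi:

```shell
# Stand-in prefix for demonstration; in real life this would be a
# directory such as /opt/openmpi holding only the Open MPI install.
PREFIX=/tmp/openmpi-old-prefix
mkdir -p "$PREFIX/bin" "$PREFIX/lib"   # pretend this is the old install

# Because nothing else lives under $PREFIX, removing the old version
# is a single command:
rm -rf "$PREFIX"

# ...then unpack the new tarball and install it the usual way, e.g.:
#   ./configure --prefix=/tmp/openmpi-old-prefix && make all install
[ ! -d "$PREFIX" ] && echo "old install removed"
```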

Whatever path you take, remember the new installation must be available on all
the nodes in your cluster, and that different versions of Open MPI will
probably not work together. That is, you can't use 1.2.4 on the head node and
1.2.3 on the compute nodes.
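One way to check this before running jobs is to ask every node for its version string and make sure they agree. A hedged sketch (hostnames are hypothetical, and the local demonstration uses a stub in place of ssh):

```shell
# Collect the first line of `mpirun --version` from each node and
# verify that all nodes report exactly the same version string.
check_versions() {
  remote="$1"; shift
  versions=$(for node in "$@"; do
    $remote "$node" mpirun --version 2>&1 | head -n1
  done)
  # One unique line across all nodes => consistent installs.
  [ "$(printf '%s\n' "$versions" | sort -u | grep -c .)" -eq 1 ]
}

# In real use:  check_versions ssh head node01 node02
# Local demonstration with a stub that always reports the same version:
fake_ssh() { echo "mpirun (Open MPI) 1.2.4"; }
check_versions fake_ssh head node01 node02 && echo "versions match"
```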

I hope this helps. Let me know if you have any problems,

Tim

On Wednesday 26 September 2007 04:37:16 pm Francesco Pietra wrote:
> Are there any detailed directions for upgrading (for common guys, not
> experts, I mean)? My 1.2.3 version on Debian Linux amd64 runs perfectly.
> Thanks
> francesco pietra
>
> --- Tim Mattox <timattox_at_[hidden]> wrote:
> > The Open MPI Team, representing a consortium of research, academic,
> > and industry partners, is pleased to announce the release of Open MPI
> > version 1.2.4. This release is mainly a bug fix release over the v1.2.3
> > release, but there are a few new features. We strongly recommend
> > that all users upgrade to version 1.2.4 if possible.
> >
> > Version 1.2.4 can be downloaded from the main Open MPI web site or
> > any of its mirrors (mirrors will be updating shortly).
> >
> > Here is a list of changes in v1.2.4 as compared to v1.2.3:
> >
> > - Really added support for TotalView/DDT parallel debugger message queue
> > debugging (it was mistakenly listed as "added" in the 1.2 release).
> > - Fixed a build issue with GNU/kFreeBSD. Thanks to Petr Salinger for
> > the patch.
> > - Added missing MPI_FILE_NULL constant in Fortran. Thanks to
> > Bernd Schubert for bringing this to our attention.
> > - Change such that the UDAPL BTL is now only built in Linux when
> > explicitly specified via the --with-udapl configure command line
> > switch.
> > - Fixed an issue with umask not being propagated when using the TM
> > launcher.
> > - Fixed behavior if number of slots is not the same on all bproc nodes.
> > - Fixed a hang on systems without GPR support (ex. Cray XT3/4).
> > - Prevent users of 32-bit MPI apps from requesting >= 2GB of shared
> > memory.
> > - Added a Portals MTL.
> > - Fix 0 sized MPI_ALLOC_MEM requests. Thanks to Lisandro Dalcin for
> > pointing out the problem.
> > - Fixed a segfault crash on large SMPs when doing collectives.
> > - A variety of fixes for Cray XT3/4 class of machines.
> > - Fixed which error handler is used when MPI_COMM_SELF is passed
> > to MPI_COMM_FREE. Thanks to Lisandro Dalcin for the bug report.
> > - Fixed compilation on platforms that don't have hton/ntoh.
> > - Fixed a logic problem in the fortran binding for MPI_TYPE_MATCH_SIZE.
> > Thanks to Jeff Dusenberry for pointing out the problem and supplying
> > the fix.
> > - Fixed a problem with MPI_BOTTOM in various places of the f77-interface.
> > Thanks to Daniel Spangberg for bringing this up.
> > - Fixed problem where MPI-optional Fortran datatypes were not
> > correctly initialized.
> > - Fixed several problems with stdin/stdout forwarding.
> > - Fixed overflow problems with the sm mpool MCA parameters on large SMPs.
> > - Added support for the DDT parallel debugger via orterun's --debug
> > command line option.
> > - Added some sanity/error checks to the openib MCA parameter parsing
> > code.
> > - Updated the udapl BTL to use RDMA capabilities.
> > - Allow use of the BProc head node if it was allocated to the user.
> > Thanks to Sean Kelly for reporting the problem and helping debug it.
> > - Fixed a ROMIO problem where non-blocking I/O errors were not properly
> > reported to the user.
> > - Made remote process launch check the $SHELL environment variable if
> > a valid shell was not otherwise found for the user.
> > Thanks to Alf Wachsmann for the bugreport and suggested fix.
> > - Added/updated some vendor IDs for a few openib HCAs.
> > - Fixed a couple of failures that could occur when specifying devices
> > for use by the OOB.
> > - Removed dependency on sysfsutils from the openib BTL for
> > libibverbs >=v1.1 (i.e., OFED 1.2 and beyond).
> >
> > --
> > Tim Mattox
> > Open Systems Lab
> > Indiana University
> > _______________________________________________
> > users mailing list
> > users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/users