Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] Open MPI 2009 released
From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2009-04-01 19:39:46

My wife thought it was frackin' brilliant. :)

Sent from my PDA. No type good.

----- Original Message -----
From: devel-bounces_at_[hidden] <devel-bounces_at_[hidden]>
To: Open MPI Developers <devel_at_[hidden]>
Sent: Wed Apr 01 18:58:55 2009
Subject: Re: [OMPI devel] Open MPI 2009 released

Bravo!! This is beautiful.
By far my favorite part is "Cobol (so say we all!)".
However, I question why ARM6 was targeted as opposed to ARM7 ;-)


George Bosilca wrote:
> The Open MPI Team, representing a consortium of bailed-out banks, car
> manufacturers, and insurance companies, is pleased to announce the
> release of the "unbreakable" / bug-free version Open MPI 2009
> (expected to be available by mid-2011). This release is essentially a
> complete rewrite of Open MPI based on new technologies such as C#,
> Java, and object-oriented Cobol (so say we all!). Buffer overflows
> and memory leaks are now things of the past. We strongly recommend
> that all users upgrade to Windows 7 to fully take advantage of the new
> powers embedded in Open MPI.
> This version can be downloaded from The Onion web site or from
> many BitTorrent networks (seeding now; the Open MPI ISO is
> approximately 3.97GB -- please wait for the full upload).
> Here is an abbreviated list of changes in Open MPI 2009 as compared to
> the previous version:
> - Dropped support for MPI 2 in favor of the newly enhanced MPI 11.7
> standard. MPI_COOK_DINNER support is only available with additional
> equipment (some assembly may be required). An experimental PVM-like
> API has been introduced to deal with the current limitations of the
> MPI 11.7 API.
> - Added a Twitter network transport capable of achieving peta-scale
> per second bandwidth (but only on useless data).
> - Dropped support for the barely-used x86 and x86_64 architectures in
> favor of the most recent ARM6 architecture. As a direct result,
> several Top500 sites are planning to convert from their now obsolete
> peta-scale machines to high-reliability iPhone clusters using the
> low-latency AT&T 3G network.
> - The iPhone iMPI app (powered by iOpen MPI) is now downloadable from
> the iTunes Store. Blackberry support will be included in a future
> release.
> - Fix all compiler errors related to the PGI 8.0 compiler by
> completely dropping support.
> - Add some "green" features for energy savings. The new "--bike"
> mpirun option will run your parallel jobs only during the
> operating hours of the official Open MPI biking team. The
> "--preload-result" option will directly embed the final result in
> the parallel execution, leading to more scalable and reliable runs
> and decreasing the execution time of any parallel application under
> the real-time limit of 1 second. Open MPI is therefore EnergyStar
> compliant when used with these options.
> - In addition to moving Open MPI's lowest point-to-point transports to
> be an external project, limited support will be offered for
> industry-standard platforms. Our focus will now be to develop
> highly scalable transports based on widely distributed technologies
> such as SMTP, High Performance Gopher (v3.8 and later), OLE COMM,
> RSS/Atom, DNS, and Bonjour.
> - Opportunistic integration with Conficker in order to utilize free
> resources distributed world-wide.
> - Support for all Fortran versions prior to Fortran 2020 has been
> dropped.
> Make today an Open MPI day!
> _______________________________________________
> devel mailing list
> devel_at_[hidden]

Paul H. Hargrove PHHargrove_at_[hidden]
Future Technologies Group Tel: +1-510-495-2352
HPC Research Department Fax: +1-510-486-6900
Lawrence Berkeley National Laboratory