Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Roadrunner blasts past the petaflop mark with Open MPI
From: Brock Palen (brockp_at_[hidden])
Date: 2008-06-16 22:46:35


Brad, just curious.

Did you tweak any other values for starting and running a job on such
a large system? You say it ran unmodified, but Open MPI lets you tweak
many values at runtime.

I would be curious to learn from what you discovered.
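
For reference, the runtime values I'm thinking of are Open MPI's MCA
parameters. A minimal sketch of setting them; the parameter names
below are just common InfiniBand-era examples, not anything you have
confirmed using:

  # list every MCA parameter this Open MPI build recognizes
  ompi_info --param all all

  # override parameters at job launch; "xhpl" is only a stand-in
  # name for the Linpack binary
  mpirun -np 12240 --mca btl openib,sm,self --mca mpi_leave_pinned 1 ./xhpl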

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
brockp_at_[hidden]
(734)936-1985

On Jun 16, 2008, at 10:12 PM, Brad Benton wrote:

> Greetings Open MPI users; we thought you'd be interested in the
> following announcement...
>
> A new supercomputer, powered by Open MPI, has broken the petaflop
> barrier to become the world's fastest supercomputer. The
> "Roadrunner" system was jointly developed by Los Alamos National
> Laboratories and IBM. Roadrunner's design uses a cluster of AMD
> dual-core processors coupled with computational accelerators based
> on the IBM Cell Broadband Engine. The cluster consists of 3,060
> nodes, each of which has 2 dual-core AMD processors and associated
> Cell accelerators. The AMD nodes are connected with 4x DDR
> InfiniBand links.
>
> Open MPI was used as the communications library for the 12,240
> processes that made up the Linpack run, which broke the petaflop
> barrier at 1.026 petaflop/s. The version of Open MPI used in the
> run-for-record was a pre-release version of the upcoming 1.3
> release. Enhancements in this release include modifications for
> efficient, scalable process launch. As such, Open MPI was run
> unmodified from a snapshot of the pre-1.3 source base (meaning:
> there are no Roadrunner-specific enhancements that are non-portable to
> other environments -- all Open MPI users benefit from the
> scalability and performance improvements contributed by the
> Roadrunner project).
>
> --Brad Benton
> Open MPI/Roadrunner Team
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users