
Subject: Re: [OMPI users] Roadrunner blasts past the petaflop mark with Open MPI
From: Durga Choudhury (dpchoudh_at_[hidden])
Date: 2010-06-23 10:20:58

Hi Brad/others

Sorry for waking this very stale thread, but I am researching the
prospects of Cell BE based supercomputing and I found this old email a
promising lead.

My question is: what was the reason for choosing to mix x86-based
AMD cores with the PPC 970-based Cell? Was the Cell-based computer
custom-made or off-the-shelf? If the latter, what was the brand? Are
there any more details on the design that Brad or someone else can
provide? Was the AMD CPU 32-bit or 64-bit (and was the OS 32-bit or
64-bit)? What (distro) was the OS? (I assume it was Linux.) What
toolchain was used for code development?

I have Jack's excellent tutorial from netlib, but as always, more
info is better.

Best regards

On Mon, Jun 16, 2008 at 10:12 PM, Brad Benton <bradford.benton_at_[hidden]> wrote:
> Greetings Open MPI users; we thought you'd be interested in the
> following announcement...
> A new supercomputer, powered by Open MPI, has broken the petaflop
> barrier to become the world's fastest supercomputer.  The
> "Roadrunner" system was jointly developed by Los Alamos National
> Laboratories and IBM.  Roadrunner's design uses a cluster of AMD
> dual-core processors coupled with computational accelerators based
> on the IBM Cell Broadband Engine.  The cluster consists of 3,060
> nodes, each of which has 2 dual-core AMD processors and associated
> Cell accelerators.  The AMD nodes are connected with 4x DDR
> InfiniBand links.
> Open MPI was used as the communications library for the 12,240
> processes comprising the Linpack run which broke the Petaflop
> barrier at 1.026 Petaflop/s.  The version of Open MPI used in the
> run-for-record was a pre-release version of the upcoming 1.3
> release.  Enhancements in this release include modifications for
> efficient, scalable process launch.  As such, Open MPI was run
> unmodified from a snapshot of the pre-1.3 source base (meaning:
> there are no Roadrunner-specific enhancements that are unportable to
> other environments -- all Open MPI users benefit from the
> scalability and performance improvements contributed by the
> Roadrunner project).
> --Brad Benton
> Open MPI/Roadrunner Team
> _______________________________________________
> users mailing list
> users_at_[hidden]
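As a quick sanity check on the figures quoted above (assuming one MPI
process per AMD core, which is what the numbers imply), the process
count follows directly from the node configuration:

```python
# Sanity check on the Roadrunner figures from the announcement above.
nodes = 3060           # cluster nodes
cpus_per_node = 2      # two dual-core AMD processors per node
cores_per_cpu = 2      # dual-core

mpi_processes = nodes * cpus_per_node * cores_per_cpu
print(mpi_processes)   # 12240 -- matches the 12,240 Linpack processes

# Average share of the 1.026 Petaflop/s Linpack result per MPI process
# (most of the actual flops came from the Cell accelerators, of course).
pflops = 1.026e15
print(round(pflops / mpi_processes / 1e9, 1))  # ~83.8 GFlop/s per process
```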

It's a battle between humans and communists;
which side are you on?