Open MPI Development Mailing List Archives

From: Resat Umit Payli (rupayli_at_[hidden])
Date: 2007-09-12 09:41:38


Hi,

I am not sure whether this is relevant to your question, but I have a
computational fluid dynamics application that solves fluid flow problems.

Recently I was able to run this code on up to 2048 processors on Indiana
University's IBM e1350 BigRed cluster, using version 1.2.3 of Open MPI,
and I am happy with the performance I got. (I am using point-to-point
sendrecv in this code.)
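
For illustration, here is a minimal sketch of the kind of point-to-point
MPI_Sendrecv exchange such a solver might perform. The ring-neighbor
pattern and buffer length are assumptions made for the example, not the
actual application code:

#include <mpi.h>
#include <stdio.h>

#define N 1024  /* illustrative halo-buffer length */

int main(int argc, char **argv)
{
    int rank, size;
    double send_buf[N], recv_buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Fill the send buffer with dummy data tagged by our rank. */
    for (int i = 0; i < N; i++)
        send_buf[i] = (double)rank;

    int right = (rank + 1) % size;        /* neighbor we send to   */
    int left  = (rank - 1 + size) % size; /* neighbor we recv from */

    /* The combined send+receive avoids the deadlock that paired
       blocking MPI_Send/MPI_Recv calls can cause at large np. */
    MPI_Sendrecv(send_buf, N, MPI_DOUBLE, right, 0,
                 recv_buf, N, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received halo data from rank %d\n", rank, left);

    MPI_Finalize();
    return 0;
}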

Thank you

On 9/12/07, Jeff Squyres <jsquyres_at_[hidden]> wrote:
>
> Cisco is not yet testing that large, but we plan to start testing
> shortly at np>=128 (I'm waiting for an internal cluster within Cisco
> to be set up properly).
>
>
> On Sep 11, 2007, at 5:31 PM, Rolf.Vandevaart_at_[hidden] wrote:
>
> >
> > I am curious which tests are being used when running tests on larger
> > clusters. And by larger clusters, I mean anything with np > 128.
> > (I realize that is not very large, but it is bigger than most of the
> > clusters I assume tests are being run on.)
> > I ask this because I planned on using some of the Intel tests, but
> > they clearly have limitations starting at np=64.
> >
> > To avoid mailing list clutter, feel free to just email me and I will
> > summarize.
> >
> > Rolf
>
>
> --
> Jeff Squyres
> Cisco Systems
>
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>