Open MPI User's Mailing List Archives

From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2006-04-26 14:38:01


You might want to take this question over to the Beowulf list -- they
talk a lot more about cluster configurations than we do -- and/or the
MM5 and WIEN2k support lists, since they know the details of those
applications. If you're going to build a cluster for a specific set of
applications, it is best to get input from the developers who know the
applications best, including what their communication characteristics
are.
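
FWIW, one cheap way to get a feel for what's at stake in the 32x2 vs.
64x1 choice is a simple MPI ping-pong test: run it once with both
ranks on the same node (shared memory) and once with one rank per node
(across the GigE switch), and compare the round-trip times. Here's a
minimal sketch -- a generic illustration only; the 1 KB message size
and iteration count are arbitrary picks, not tuned to MM5 or WIEN2k:

/* pingpong.c -- minimal MPI round-trip timer.
 * Compile: mpicc pingpong.c -o pingpong
 * Run with exactly 2 processes.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 1000, n = 1024;  /* 1 KB messages */
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(n);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {          /* rank 0: send, then wait for echo */
            MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {   /* rank 1: echo everything back */
            MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip: %g usec\n", (t1 - t0) / iters * 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

With Open MPI you can steer placement via a hostfile or mpirun's
--host option (e.g., both ranks on node1, then one on node1 and one
on node2). If the same-node time is far below the cross-switch time
and the applications are latency-sensitive, the dual-core boxes look
attractive, since half the traffic stays on-node. If they're
memory-bandwidth bound, keep in mind that the two cores of a
dual-core Opteron share one memory controller, so the single-core
boxes may do better per process.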

> -----Original Message-----
> From: users-bounces_at_[hidden]
> [mailto:users-bounces_at_[hidden]] On Behalf Of hpc_at_[hidden]
> Sent: Wednesday, April 26, 2006 12:23 PM
> To: users_at_[hidden]
> Subject: [OMPI users] which is better: 64x1 or 32x2
>
> Hi,
>
> I want to build an HPC cluster for running the MM5 and WIEN2k
> scientific applications for my physics college. Both of them
> use MPI.
>
> Interconnect between nodes: Gigabit Ethernet (Cisco 24-port GigE switch)
>
> It seems I have two choices for nodes:
> * 32 dual-core Opteron processors (1 GB RAM for each node)
> * 64 single-core Opteron processors (2 GB RAM for each node)
>
> Which is better (performance & price)?