
Open MPI User's Mailing List Archives


From: damien_at_[hidden]
Date: 2006-04-26 15:02:27


One thing to look at is how much bandwidth the models require relative to
the CPU load. You can redline Gigabit Ethernet with a 1 GHz PIII and a
64-bit PCI bus, and Opterons on a decent motherboard will definitely keep
a gigabit link chock full. With dual-core you get the advantage of very
fast processor-to-processor communication, but you run the risk of
choking on the Ethernet connection. You might be OK if you can get dual
Ethernet ports on the motherboard and run channel bonding to increase
the bandwidth, but your switch has to be able to handle it.
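As a rough sketch of what channel bonding looks like on a Linux node (interface names, the bond address, and the choice of balance-rr mode are all placeholders -- adjust for your distro and hardware):

```shell
# /etc/modprobe.conf entry: load the kernel bonding driver.
# mode=balance-rr stripes frames across both links; mode=802.3ad (LACP)
# generally gives better behavior but requires the switch to support
# link aggregation (e.g. an EtherChannel group on a Cisco switch).
alias bond0 bonding
options bonding mode=balance-rr miimon=100

# Bring the bond up and enslave both onboard NICs
# (eth0/eth1 and 192.168.1.10 are example values):
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

Note that a single MPI point-to-point stream won't necessarily see double the bandwidth; bonding mostly helps when several ranks on a node are talking to different peers at once.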

Damien

> You might want to take this question over to the Beowulf list -- they
> talk a lot more about cluster configurations than we do -- and/or the
> MM5 and WIEN2k support lists (since they know the details of those
> applications -- if you're going to have a cluster for a specific set of
> applications, it can be best to get input from the developers who know
> the applications best, and what their communication characteristics
> are).
>
>
>
>> -----Original Message-----
>> From: users-bounces_at_[hidden]
>> [mailto:users-bounces_at_[hidden]] On Behalf Of hpc_at_[hidden]
>> Sent: Wednesday, April 26, 2006 12:23 PM
>> To: users_at_[hidden]
>> Subject: [OMPI users] which is better: 64x1 or 32x2
>>
>> Hi,
>>
>> I want to build an HPC cluster for running the MM5 and WIEN2k
>> scientific applications for my physics college. Both of them
>> use MPI.
>>
>> Interconnection between nodes: GigEth (Cisco 24 port GigEth)
>>
>> It seems I have two choices for nodes:
>> * 32 dual-core Opteron processors (1 GB RAM per node)
>> * 64 single-core Opteron processors (2 GB RAM per node)
>>
>> Which is better (performance & price)?
>>
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>