On Thu, Jul 08, 2010 at 09:43:48AM -0400, Gus Correa wrote:
> Douglas Guptill wrote:
>> On Wed, Jul 07, 2010 at 12:37:54PM -0600, Ralph Castain wrote:
>>> No....afraid not. Things work pretty well, but there are places
>>> where things just don't mesh. Sub-node allocation in particular is
>>> an issue as it implies binding, and slurm and ompi have conflicting
>>> approaches. It all can get worked out, but we have limited time and nobody cares
>>> enough to put in the effort. Slurm just isn't used enough to make it
>>> worthwhile (too small an audience).
>> I am about to get my first HPC cluster (128 nodes), and was
>> considering slurm. We do use MPI.
>> Should I be looking at Torque instead for a queue manager?
> Hi Douglas
> Yes, Torque works like a charm along with OpenMPI.
> I also have MVAPICH2 and MPICH2, no integration w/ Torque,
> but no conflicts either.
After some lurking and reading, I plan this:
+ FAI - for compute-node operating system install
+ Torque - job scheduler/manager
+ MPI (Intel MPI) - for the application
+ MPI (Open MPI) - alternative MPI
Does anyone see holes in this plan?
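For what it's worth, with Torque plus an Open MPI built with tm (Torque) support, the job script can be quite minimal, since mpirun picks up the node allocation from the scheduler rather than from an explicit hostfile. A sketch (job name, resource request, and executable name are placeholders, adjust for your site):

```shell
#!/bin/bash
# Hypothetical Torque/PBS job script for an Open MPI application.
#PBS -N mpi_test                 # job name (placeholder)
#PBS -l nodes=4:ppn=8            # 4 nodes, 8 processes per node (example sizes)
#PBS -l walltime=00:30:00        # wall-clock limit
#PBS -j oe                       # merge stdout and stderr

# Torque starts the script in $HOME; move to the submission directory.
cd "$PBS_O_WORKDIR"

# With tm support compiled in, mpirun reads the allocation from Torque
# directly, so no -np or -hostfile is needed here.
mpirun ./my_app                  # ./my_app is a placeholder executable
```

Submit with `qsub script.pbs`. If Open MPI is built without tm support, you would instead pass the node list explicitly, e.g. `mpirun -np 32 -hostfile "$PBS_NODEFILE" ./my_app`.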
Douglas Guptill voice: 902-461-9749
Research Assistant, LSC 4640 email: douglas.guptill_at_[hidden]
Oceanography Department fax: 902-494-3877
Halifax, NS, B3H 4J1, Canada