FWIW, Open MPI works pretty well with SLURM; I use it back here at Cisco for all my testing. That one particular option you're testing doesn't seem to work, but all in all, the integration works fairly well.
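[A minimal sketch of the SLURM integration mentioned above: inside a SLURM allocation, Open MPI's mpirun picks up the node list from SLURM's environment, so no hostfile is needed. The job name, node counts, and program name below are placeholders, not from the thread.]

```shell
#!/bin/bash
#SBATCH --job-name=ompi-test        # hypothetical job name
#SBATCH --nodes=4                   # example allocation: 4 nodes
#SBATCH --ntasks-per-node=8         # 8 MPI ranks per node

# Within the allocation, mpirun discovers the SLURM-assigned nodes
# automatically via the SLURM environment variables.
mpirun ./my_mpi_program             # my_mpi_program is a placeholder
```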
On Jul 7, 2010, at 3:27 PM, Ralph Castain wrote:
> You'll get passionate advocates from all the various resource managers - there really isn't a right/wrong answer. Torque is more widely used, but any of them will do.
> None are perfect, IMHO.
> On Jul 7, 2010, at 1:16 PM, Douglas Guptill wrote:
>> On Wed, Jul 07, 2010 at 12:37:54PM -0600, Ralph Castain wrote:
>>> No....afraid not. Things work pretty well, but there are places
>>> where things just don't mesh. Sub-node allocation in particular is
>>> an issue as it implies binding, and slurm and ompi have conflicting
>>> approaches to it. It all can get worked out, but we have limited time and nobody cares
>>> enough to put in the effort. Slurm just isn't used enough to make it
>>> worthwhile (too small an audience).
>> I am about to get my first HPC cluster (128 nodes), and was
>> considering slurm. We do use MPI.
>> Should I be looking at Torque instead for a queue manager?
>> Suggestions appreciated,
>> Douglas Guptill voice: 902-461-9749
>> Research Assistant, LSC 4640 email: douglas.guptill_at_[hidden]
>> Oceanography Department fax: 902-494-3877
>> Dalhousie University
>> Halifax, NS, B3H 4J1, Canada
>> users mailing list