
Open MPI User's Mailing List Archives


This web mail archive is frozen.

No new mails have been added to this archive since July of 2016.

Subject: Re: [OMPI users] trouble using openmpi under slurm
From: Gus Correa (gus_at_[hidden])
Date: 2010-07-08 09:43:48


Douglas Guptill wrote:
> On Wed, Jul 07, 2010 at 12:37:54PM -0600, Ralph Castain wrote:
>
>> No....afraid not. Things work pretty well, but there are places
>> where things just don't mesh. Sub-node allocation in particular is
>> an issue as it implies binding, and slurm and ompi have conflicting
>> methods.
>>
>> It all can get worked out, but we have limited time and nobody cares
>> enough to put in the effort. Slurm just isn't used enough to make it
>> worthwhile (too small an audience).
>
> I am about to get my first HPC cluster (128 nodes), and was
> considering slurm. We do use MPI.
>
> Should I be looking at Torque instead for a queue manager?
>
Hi Douglas

Yes, Torque works like a charm along with Open MPI.
I also run MVAPICH2 and MPICH2 here; they have no
integration with Torque, but no conflicts with it either.
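
For what it's worth, a minimal Torque/PBS job script for an Open MPI program might look like the sketch below. The job name, node counts, and program name are hypothetical, and it assumes Open MPI was built with Torque/TM support (--with-tm), in which case mpirun picks up the allocation from Torque automatically:

```shell
#!/bin/bash
#PBS -N mpi_test            # job name (hypothetical)
#PBS -l nodes=4:ppn=8       # 4 nodes, 8 processes per node (example values)
#PBS -l walltime=01:00:00   # one hour wall-clock limit

# Torque starts the script in $HOME; move to the submission directory.
cd "$PBS_O_WORKDIR"

# With TM-enabled Open MPI, mpirun reads the node list and slot counts
# directly from the Torque allocation -- no -np or -hostfile needed.
mpirun ./my_mpi_program
```

Submitted with `qsub script.pbs`; without TM support you would instead pass `-hostfile $PBS_NODEFILE` to mpirun by hand.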

My $0.02.
Gus Correa

> Suggestions appreciated,
> Douglas.