No... afraid not. Things work pretty well, but there are places where things just don't mesh. Sub-node allocation in particular is an issue, as it implies binding, and Slurm and OMPI have conflicting methods.
It all can get worked out, but we have limited time and nobody cares enough to put in the effort. Slurm just isn't used enough to make it worthwhile (too small an audience).
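As a rough illustration of where the two can collide (the flags below are the era-appropriate SLURM and Open MPI binding options; the job sizes are made up), both layers can try to bind processes to cores on the same allocation:

    # SLURM-side binding when launching a job step directly:
    srun -n 4 --cpu_bind=cores ./app
    # Open MPI-side binding when mpirun runs inside a SLURM allocation:
    salloc -N 1 -n 4 mpirun --bind-to-core ./app

If SLURM has already confined the allocation to particular cores, mpirun's own binding logic may disagree about which cores it is allowed to use.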
On Jul 7, 2010, at 12:32 PM, David Roundy wrote:
> Alas, I'm sorry to hear that! I had hoped (assumed?) that the slurm
> team would be hand-in-glove with the OMPI team in making sure the
> interface between the two is smooth. :(
> On Wed, Jul 7, 2010 at 11:09 AM, Ralph Castain <rhc_at_[hidden]> wrote:
>> Ah, if only it were that simple. Slurm is a very difficult beast to interface with, and I have yet to find a single, reliable marker across the various slurm releases to detect options we cannot support.
>> On Jul 7, 2010, at 11:59 AM, David Roundy wrote:
>>> On Wed, Jul 7, 2010 at 10:26 AM, Ralph Castain <rhc_at_[hidden]> wrote:
>>>> I'm afraid the bottom line is that OMPI simply doesn't support core-level allocations. I tried it on a slurm machine available to me, using our devel trunk as well as 1.4, with the same results.
>>>> Not sure why you are trying to run that way, but I'm afraid you can't do it with OMPI.
>>> Hmmm. I'm still trying to figure out how to configure slurm properly.
>>> I want it to be able to put one single-process job per core on each
>>> machine. I just now figured out that there is a slurm "-n" option. I
>>> had previously only been aware of the "-N" and "-c" options, and the
>>> latter was a closer match. It looks like everything works fine with the
>>> "-n" option.
>>> However, wouldn't it be a good idea to avoid crashing when "-c 2" is
>>> used, e.g. by ignoring the environment variable SLURM_CPUS_PER_TASK?
>>> It seems like this would be an important feature to be able to use if
>>> one wanted to run mpi with multiple threads per node (as I've been
>>> known to do in the past).
>>> In my troubleshooting, I came up with the following script, which can
>>> reliably crash mpirun (when run without slurm, but obviously
>>> pretending to be running under slurm). :(
>>> set -ev
>>> export SLURM_JOBID=137
>>> export SLURM_TASKS_PER_NODE=1
>>> export SLURM_NNODES=1
>>> export SLURM_CPUS_PER_TASK=2
>>> export SLURM_NODELIST=localhost
>>> mpirun --display-devel-map echo hello world
>>> echo it worked
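Following the suggestion above about ignoring SLURM_CPUS_PER_TASK, an untested workaround sketch is simply to drop that variable from the environment before invoking mpirun (assuming the crash really is triggered by mpirun reading it):

    set -ev
    export SLURM_JOBID=137
    export SLURM_TASKS_PER_NODE=1
    export SLURM_NNODES=1
    export SLURM_NODELIST=localhost
    unset SLURM_CPUS_PER_TASK   # keep mpirun from seeing the per-task CPU count
    mpirun --display-devel-map echo hello world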
> David Roundy