Open MPI Development Mailing List Archives

Subject: [OMPI devel] OMPI & SLURM
From: Ralph Castain (rhc_at_[hidden])
Date: 2008-09-25 19:18:10


Yo all

Over the last few days, we at LANL have been working with our LLNL
counterparts on some OMPI/SLURM integration issues. In the course of
this work, we have learned that the meaning/use of the
SLURM_TASKS_PER_NODE environment variable used by OMPI (and LAM-MPI
as well as others) to extract required allocation information was
changed beginning with SLURM 1.2, and the info we are seeking was
shifted to SLURM_JOB_CPUS_PER_NODE. Since SLURM is now at release
1.3.7, this change clearly occurred some time ago.
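For anyone unfamiliar with these variables: SLURM publishes per-node counts in a compressed, comma-separated form (e.g. "4(x2),2" meaning two nodes with 4 and one node with 2). As a minimal sketch of how such a value could be expanded, assuming that compressed format (the function name and error handling here are illustrative, not OMPI code):

```c
#include <assert.h>
#include <stdlib.h>

/* Expand a SLURM-style compressed count string (e.g. "4(x2),2")
 * into an array of per-node counts: 4, 4, 2.  Returns the number
 * of entries written, or -1 on malformed input / overflow. */
static int expand_slurm_counts(const char *spec, int *out, int max)
{
    int n = 0;
    const char *p = spec;

    while (*p) {
        char *end;
        long count = strtol(p, &end, 10);
        if (end == p) return -1;          /* no digits where expected */
        long repeat = 1;
        p = end;
        if (*p == '(') {                  /* optional "(xN)" repeat suffix */
            if (p[1] != 'x') return -1;
            repeat = strtol(p + 2, &end, 10);
            if (end == p + 2 || *end != ')') return -1;
            p = end + 1;
        }
        while (repeat-- > 0) {
            if (n >= max) return -1;
            out[n++] = (int)count;
        }
        if (*p == ',') p++;               /* move to next entry */
        else if (*p) return -1;
    }
    return n;
}
```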

What I propose to do (per LLNL's recommendation) is modify the SLURM
ras module to check for SLURM_JOB_CPUS_PER_NODE first and use that
value if found - if not found, then fall back to using
SLURM_TASKS_PER_NODE. This will make us fully compatible with more
recent SLURM releases while retaining backward compatibility with pre-
SLURM 1.2 versions (assuming anyone out there is running something
that old).
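The proposed lookup order amounts to a simple two-step getenv check. A minimal sketch (the helper name is illustrative, not the actual ras module code):

```c
#include <stdlib.h>

/* Prefer SLURM_JOB_CPUS_PER_NODE (SLURM >= 1.2); fall back to
 * SLURM_TASKS_PER_NODE for pre-1.2 SLURM releases. */
static const char *get_slurm_node_spec(void)
{
    const char *spec = getenv("SLURM_JOB_CPUS_PER_NODE");
    if (NULL == spec) {
        spec = getenv("SLURM_TASKS_PER_NODE");
    }
    return spec;   /* NULL if neither variable is set */
}
```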

Given that Open MPI 1.2.8 and 1.3.0 have not yet been released, we
(LANL) would like to get this change into those releases. It is a
minor code change (I will insert it into trunk so people can see) and
easily tested on any SLURM machine.

Are there any objections/comments?

Ralph