
Open MPI Development Mailing List Archives


Subject: [OMPI devel] Openmpi with slurm : salloc -c
From: Damien Guinier (damien.guinier_at_[hidden])
Date: 2010-02-26 11:45:17


Hi Ralph,

I found a minor bug in the MCA component ras/slurm.
It behaves incorrectly with the "X number of processors
per task" feature.
In the file orte/mca/ras/slurm/ras_slurm_module.c, at line 356:
- The node slot count is divided by the "cpus_per_task" value,
    but "cpus_per_task" has already been taken into account at line 285.
My proposal is to not divide the node slot count a second time.

My patch is:

diff -r ef9d639ab011 -r 8f62269014c2 orte/mca/ras/slurm/ras_slurm_module.c
--- a/orte/mca/ras/slurm/ras_slurm_module.c Wed Jan 20 18:29:12 2010 +0100
+++ b/orte/mca/ras/slurm/ras_slurm_module.c Thu Feb 25 15:59:41 2010 +0100
@@ -353,7 +353,8 @@
         node->state = ORTE_NODE_STATE_UP;
         node->slots_inuse = 0;
         node->slots_max = 0;
-        node->slots = slots[i] / cpus_per_task;
+        node->slots = slots[i];
         opal_list_append(nodelist, &node->super);
     }
     free(slots);

Thanks
Damien