Dirk Eddelbuettel wrote:
> On 3 April 2009 at 03:33, Jerome BENOIT wrote:
> | The above submission works the same on my clusters.
> | But in fact, my issue involves interconnection between the nodes of the cluster:
> | the above examples involve no connection between nodes.
> | My cluster is a cluster of quad-core computers:
> | if in the sbatch script
> | #SBATCH --nodes=7
> | #SBATCH --ntasks=15
> | is replaced by
> | #SBATCH --nodes=1
> | #SBATCH --ntasks=4
> | everything is fine as no interconnection is involved.
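For reference, a complete single-node submission script with the settings above might look like this (a sketch; the job name is an assumption, while the `/tmp/jerome_hw` binary path comes from the test command quoted later in the thread):

```shell
#!/bin/bash
#SBATCH --job-name=hello_mpi
#SBATCH --nodes=1
#SBATCH --ntasks=4

# With --nodes=1 all four ranks share a single quad-core machine,
# so no inter-node network path is exercised.
orterun -np $SLURM_NTASKS /tmp/jerome_hw
```

Switching back to `--nodes=7` / `--ntasks=15` forces ranks onto several machines, which is what brings the inter-node interconnect into play.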
> | Can you test the interconnection part of the story?
> Again, think about it in terms of layers. You have a problem with slurm on top
> of Open MPI.
> So before blaming Open MPI, I would try something like this:
> ~$ orterun -np 2 -H abc,xyz /tmp/jerome_hw
> Hello world! I am 1 of 2 and my name is `abc'
> Hello world! I am 0 of 2 and my name is `xyz'
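The exact source of `/tmp/jerome_hw` is not shown in the thread; judging from its output, it is presumably something close to the classic MPI hello world. A minimal sketch that produces that output format:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &name_len);

    /* Each rank reports its rank, the world size, and its host name,
       so a two-node launch shows whether both hosts really took part. */
    printf("Hello world! I am %d of %d and my name is `%s'\n",
           rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Compiled with `mpicc` and launched with `orterun -np 2 -H abc,xyz`, seeing both host names in the output confirms that the two nodes can reach each other outside of slurm.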
I got it: I am very new to Open MPI.
It is working on every node except one (`green'):
I have to blame my cluster.
I will try to fix it soon.
Thank you very much for your help,
> ie whether the simple MPI example can be launched successfully on two nodes or not.