Subject: Re: [OMPI users] Can not run a parallel job on all the nodes in the cluster
From: Hameed Alzahrani (ibn_aibaan_at_[hidden])
Date: 2012-03-28 10:30:49


I mean the node that I run the mpirun command from. I use Condor as a scheduler, but I need to benchmark the cluster either through Condor or directly with Open MPI. When I ran mpirun from one machine and checked the memory status on the three machines that I have, memory usage increased only on that same machine.
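
[Editorial sketch: when mpirun is started by hand with neither a hostfile nor a --host list (and no resource-manager allocation), Open MPI launches every process on the local machine, which would match the behavior described above. A quicker check than watching memory is to run a trivial command such as hostname across all slots and see which machines answer. This is a minimal sketch; the host names node1, node2, and node3 are hypothetical, and it assumes passwordless SSH from the submitting node to the others:

    # hostfile: one line per machine, slots = processes to start there
    # (node1..node3 are hypothetical names)
    node1 slots=4
    node2 slots=4
    node3 slots=4

    # each rank prints the name of the machine it runs on; if all
    # eight lines name the submitting node, the remote hosts are
    # not being used
    $ mpirun --hostfile hostfile -np 8 hostname

A sketch for the even-distribution question raised in the quoted thread follows after the quote.]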


> From: reuti_at_[hidden]
> Date: Wed, 28 Mar 2012 15:12:17 +0200
> To: users_at_[hidden]
> Subject: Re: [OMPI users] Can not run a parallel job on all the nodes in the cluster
> Hi,
> On 27.03.2012 at 23:46, Hameed Alzahrani wrote:
> > When I run any parallel job I get output only from the submitting node
> What do you mean by "submitting node"? You use a queuing system - which one?
> -- Reuti
> > even when I tried to benchmark the cluster using LINPACK, the job appeared to run only on the submitting node. Is there a way to make Open MPI distribute the job equally across all the nodes, according to the number of processors on each node? Even when I specify that the job should use 8 processors, Open MPI seems to use only the submitting node's 4 processors instead of the processors on the other nodes. I also tried --host, but it did not work correctly for benchmarking the cluster. Does anyone use Open MPI to benchmark a cluster, or does anyone know how to make Open MPI divide a parallel job equally among every processor in the cluster?
> >
> > Regards,
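
[Editorial sketch, on the distribution question in the quoted thread: with a hostfile, mpirun fills the slots of each host in order, so the first ranks all land on the first host. To spread ranks round-robin across machines instead, Open MPI provides a placement option whose spelling depends on the release (--bynode in the older 1.x series, --map-by node in newer versions). A sketch under the same assumptions as above, with xhpl as the HPL/LINPACK binary (the conventional name, which may differ in your build):

    # round-robin placement across hosts: older Open MPI releases
    $ mpirun --hostfile hostfile --bynode -np 8 ./xhpl

    # the same on newer Open MPI releases
    $ mpirun --hostfile hostfile --map-by node -np 8 ./xhpl

With 8 ranks over three 4-slot hosts, this starts 3 ranks on two of the machines and 2 on the third, instead of filling the first machine before touching the rest.]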